103 results for Random Variable
Abstract:
This study attempts to fill the existing gap in the simulation of variable-flow distribution systems by developing new pressure-governing components. These components capture the continuously changing system performance curve in variable-flow distribution systems and predict problematic conditions such as starving, over-flow, and the lack of controllability over the flow rate of different branches in a hydronic system. The performance of the proposed components is verified using a case study under design and off-design conditions. Full integration of the new components within the TRNSYS simulation package is a further advantage of this study, making it more readily applicable for designers in both the design and commissioning of hydronic systems.
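As a toy illustration of why the system performance curve of a variable-flow network keeps changing (this is not the paper's TRNSYS components), the sketch below intersects a quadratic pump curve with the curve of two parallel branches and shows how throttling one valve starves it while the other branch over-flows. All coefficients are hypothetical.

```python
"""Toy illustration (not the paper's TRNSYS components): how the operating
point of a variable-flow hydronic loop shifts as branch valves throttle.
Pump-curve coefficients and branch resistances are hypothetical numbers."""
import math

def operating_point(a, b, branch_resistances):
    """Intersect pump curve dP = a - b*Q^2 with the system curve of
    parallel branches dP = R_i * Q_i^2 (quadratic, fully turbulent)."""
    # Equivalent resistance of parallel quadratic branches:
    # Q_total = sum(sqrt(dP / R_i))  =>  R_eq = 1 / (sum(1/sqrt(R_i)))^2
    r_eq = 1.0 / sum(1.0 / math.sqrt(r) for r in branch_resistances) ** 2
    q_total = math.sqrt(a / (b + r_eq))          # total flow at the intersection
    dp = a - b * q_total ** 2                    # pump head at that flow
    q_branches = [math.sqrt(dp / r) for r in branch_resistances]
    return q_total, dp, q_branches

# Design condition: both branch valves open (hypothetical resistances).
print(operating_point(a=30.0, b=2.0, branch_resistances=[4.0, 4.0]))
# Off-design: one valve throttles (its resistance rises), the system curve
# steepens, total flow drops, and the still-open branch sees over-flow.
print(operating_point(a=30.0, b=2.0, branch_resistances=[4.0, 40.0]))
```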
Abstract:
In order to validate the reported precision of space‐based atmospheric composition measurements, validation studies often focus on measurements in the tropical stratosphere, where natural variability is weak. The scatter in tropical measurements can then be used as an upper limit on single‐profile measurement precision. Here we introduce a method of quantifying the scatter of tropical measurements which aims to minimize the effects of short‐term atmospheric variability while maintaining large enough sample sizes that the results can be taken as representative of the full data set. We apply this technique to measurements of O3, HNO3, CO, H2O, NO, NO2, N2O, CH4, CCl2F2, and CCl3F produced by the Atmospheric Chemistry Experiment–Fourier Transform Spectrometer (ACE‐FTS). Tropical scatter in the ACE‐FTS retrievals is found to be consistent with the reported random errors (RREs) for H2O and CO at altitudes above 20 km, validating the RREs for these measurements. Tropical scatter in measurements of NO, NO2, CCl2F2, and CCl3F is roughly consistent with the RREs as long as the effect of outliers in the data set is reduced through the use of robust statistics. The scatter in measurements of O3, HNO3, CH4, and N2O in the stratosphere, while larger than the RREs, is shown to be consistent with the variability simulated in the Canadian Middle Atmosphere Model. This result implies that, for these species, stratospheric measurement scatter is dominated by natural variability, not random error, which provides added confidence in the scientific value of single‐profile measurements.
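To illustrate the role of robust statistics in quantifying scatter (a minimal sketch on synthetic data, not the ACE-FTS processing), a MAD-based estimate resists the outliers that inflate the ordinary standard deviation:

```python
"""Minimal sketch of the robust-statistics point: the sample standard
deviation is inflated by a few outliers, while a MAD-based estimate is not.
Data are synthetic; this is not the ACE-FTS retrieval or validation code."""
import numpy as np

rng = np.random.default_rng(0)
profiles = rng.normal(loc=1.0, scale=0.05, size=500)   # "true" precision of 5%
profiles[:5] = [3.0, -1.0, 2.5, 4.0, -0.5]             # a handful of outliers

std_scatter = np.std(profiles, ddof=1)
# 1.4826 * MAD estimates sigma for Gaussian data while ignoring the tails.
robust_scatter = 1.4826 * np.median(np.abs(profiles - np.median(profiles)))

print(f"standard deviation: {std_scatter:.3f}")   # dominated by the outliers
print(f"MAD-based scatter : {robust_scatter:.3f}")  # close to 0.05
```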
Abstract:
The redistribution of a finite amount of martian surface dust during global dust storms and in the intervening periods has been modelled in a dust lifting version of the UK Mars General Circulation Model. When using a constant, uniform threshold in the model’s wind stress lifting parameterisation and assuming an unlimited supply of surface dust, multiannual simulations displayed some variability in dust lifting activity from year to year, arising from internal variability manifested in surface wind stress, but dust storms were limited in size and formed within a relatively short seasonal window. Lifting thresholds were then allowed to vary at each model gridpoint, dependent on the rates of emission or deposition of dust. This enhanced interannual variability in dust storm magnitude and timing, such that model storms covered most of the observed ranges in size and initiation date within a single multiannual simulation. Peak storm magnitude in a given year was primarily determined by the availability of surface dust at a number of key sites in the southern hemisphere. The observed global dust storm (GDS) frequency of roughly one in every 3 years was approximately reproduced, but the model failed to generate these GDSs spontaneously in the southern hemisphere, where they have typically been observed to initiate. After several years of simulation, the surface threshold field—a proxy for net change in surface dust density—showed good qualitative agreement with the observed pattern of martian surface dust cover. The model produced a net northward cross-equatorial dust mass flux, which necessitated the addition of an artificial threshold decrease rate in order to allow the continued generation of dust storms over the course of a multiannual simulation. At standard model resolution, for the southward mass flux due to cross-equatorial flushing storms to offset the northward flux due to GDSs on a timescale of ∼3 years would require an increase in the former by a factor of 3–4. Results at higher model resolution and uncertainties in dust vertical profiles mean that quasi-periodic redistribution of dust on such a timescale nevertheless appears to be a plausible explanation for the observed GDS frequency.
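As a heavily hedged toy version of the interactive threshold idea summarized above, the sketch below updates a per-gridpoint lifting threshold so that it rises where dust has been removed and falls where dust is deposited, and also applies the artificial decrease rate mentioned in the abstract. The update rule, coefficients and grid are hypothetical; this is not the UK MGCM wind stress lifting parameterisation.

```python
"""Toy gridpoint update for an interactive lifting threshold (a proxy for
net change in surface dust), not the UK MGCM parameterisation: the threshold
rises where dust has been emitted, falls where dust is deposited, and is
relaxed by a small artificial decrease rate, as described in the abstract.
All rate coefficients are hypothetical."""
import numpy as np

def step_threshold(threshold, emission, deposition,
                   k_emit=0.10, k_dep=0.10, artificial_decrease=1e-3):
    threshold = threshold + k_emit * emission - k_dep * deposition
    threshold = threshold - artificial_decrease      # keeps storm generation possible
    return np.clip(threshold, 0.5, 2.0)              # bound the proxy field

# One timestep on a tiny 2x2 "grid" (arbitrary units).
thr = np.full((2, 2), 1.0)
emission = np.array([[0.5, 0.0], [0.0, 0.0]])        # lifting at one key site
deposition = np.array([[0.0, 0.2], [0.0, 0.0]])      # fallout downwind
print(step_threshold(thr, emission, deposition))
```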
Abstract:
What are the precise brain regions supporting the short-term retention of verbal information? A previous functional magnetic resonance imaging (fMRI) study suggested that they may be topographically variable across individuals, occurring, in most, in regions posterior to prefrontal cortex (PFC), and that detection of these regions may be best suited to a single-subject (SS) approach to fMRI analysis (Feredoes and Postle, 2007). In contrast, other studies using spatially normalized group-averaged (SNGA) analyses have localized storage-related activity to PFC. To evaluate the necessity of the regions identified by these two methods, we applied repetitive transcranial magnetic stimulation (rTMS) to SS- and SNGA-identified regions throughout the retention period of a delayed letter-recognition task. Results indicated that rTMS targeting SS analysis-identified regions of left perisylvian and sensorimotor cortex impaired performance, whereas rTMS targeting the SNGA-identified region of left caudal PFC had no effect on performance. Our results support the view that the short-term retention of verbal information can be supported by regions associated with acoustic, lexical, phonological, and speech-based representation of information. They also suggest that the brain bases of some cognitive functions may be better detected by SS than by SNGA approaches to fMRI data analysis.
Abstract:
The link between the Pacific/North American pattern (PNA) and the North Atlantic Oscillation (NAO) is investigated in reanalysis data (NCEP, ERA40) and in multi-century CGCM runs for present-day climate using three versions of the ECHAM model. PNA and NAO patterns and indices are determined via rotated principal component analysis on monthly mean 500 hPa geopotential height fields using the varimax criterion. On average, the multi-century CGCM simulations show a significant anti-correlation between PNA and NAO. Further, multi-decadal periods with significantly enhanced (high anti-correlation, active phase) or weakened (low correlation, inactive phase) coupling are found in all CGCMs. In the simulated active phases, the storm track activity near Newfoundland has a stronger link with the PNA variability than during the inactive phases. On average, the reanalysis datasets show no significant anti-correlation between the PNA and NAO indices, but during the sub-period 1973–1994 a significant anti-correlation is detected, suggesting that the present climate could correspond to an inactive period as detected in the CGCMs. An analysis of possible physical mechanisms suggests that the link between the patterns is established by the baroclinic waves forming the North Atlantic storm track. The geopotential height anomalies associated with negative PNA phases induce increased advection of warm and moist air from the Gulf of Mexico and of cold air from Canada. Both types of advection increase baroclinicity over eastern North America and also increase the low-level latent heat content of the warm air masses. Thus, growth conditions for eddies at the entrance of the North Atlantic storm track are enhanced. Considering the average temporal development during winter in the CGCM, results show an enhanced Newfoundland storm track maximum in early winter for negative PNA, followed by a downstream enhancement of the Atlantic storm track in the subsequent months. In active (inactive) phases, this seasonal development is enhanced (suppressed). As the storm track over the central and eastern Atlantic is closely related to the NAO variability, this development can be explained by the shift of the NAO index to more positive values.
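A sketch of the analysis chain described above (not the authors' code): compute EOFs of a monthly 500 hPa anomaly field, apply a standard varimax rotation, and correlate two rotated-PC indices as stand-ins for the PNA and NAO indices. The height field here is synthetic.

```python
"""Sketch of rotated principal component analysis (varimax) of monthly
500 hPa height anomalies, followed by correlation of two rotated-PC indices
(stand-ins for PNA and NAO). Synthetic data; not the authors' code."""
import numpy as np

def varimax(loadings, n_iter=100, tol=1e-6):
    """Standard varimax rotation of a (n_gridpoints x n_modes) loading matrix."""
    p, k = loadings.shape
    rotation = np.eye(k)
    var_old = 0.0
    for _ in range(n_iter):
        rotated = loadings @ rotation
        u, s, vt = np.linalg.svd(
            loadings.T @ (rotated**3 - rotated @ np.diag((rotated**2).sum(0)) / p)
        )
        rotation = u @ vt
        if s.sum() < var_old * (1 + tol):
            break
        var_old = s.sum()
    return loadings @ rotation

rng = np.random.default_rng(1)
z500 = rng.normal(size=(600, 200))         # 600 months x 200 gridpoints (synthetic)
anom = z500 - z500.mean(axis=0)

# PCA via SVD of the anomaly matrix, keep the leading modes, then rotate.
u, s, vt = np.linalg.svd(anom, full_matrices=False)
n_modes = 10
loadings = vt[:n_modes].T * s[:n_modes]    # gridpoint loadings of the leading EOFs
rot_loadings = varimax(loadings)

# Rotated PC time series (indices); with real data, two of these would be
# identified as the PNA and NAO patterns and their indices correlated.
rot_pcs = anom @ rot_loadings @ np.linalg.inv(rot_loadings.T @ rot_loadings)
print(np.corrcoef(rot_pcs[:, 0], rot_pcs[:, 1])[0, 1])
```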
Abstract:
Ensemble learning can increase the overall classification accuracy of a classifier by generating multiple base classifiers and combining their classification results. A frequently used family of base classifiers for ensemble learning is decision trees. However, alternative approaches can potentially be used, such as the Prism family of algorithms, which also induces classification rules. Compared with decision trees, Prism algorithms generate modular classification rules that cannot necessarily be represented in the form of a decision tree. Prism algorithms achieve a classification accuracy similar to that of decision trees, and in some cases, for example when there is noise in the training and test data, they can outperform decision trees by achieving a higher classification accuracy. Nevertheless, Prism still tends to overfit on noisy data; hence, ensemble learners have been adopted in this work to reduce the overfitting. This paper describes the development of an ensemble learner using a member of the Prism family as the base classifier to reduce the overfitting of Prism algorithms on noisy datasets. The developed ensemble classifier is compared with a stand-alone Prism classifier in terms of classification accuracy and resistance to noise.
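Prism is not available in common libraries, so the sketch below illustrates the ensemble idea on noisy data with a decision tree standing in for the rule-based base classifier: train many base classifiers on bootstrap samples and combine them by majority vote. This is not the authors' implementation.

```python
"""Sketch of the ensemble idea in the abstract: bootstrap many base
classifiers and combine them by voting to curb overfitting on noisy data.
A decision tree stands in for the Prism base classifier, which is not
available in scikit-learn; this is not the authors' implementation."""
import numpy as np
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier

X, y = make_classification(n_samples=1000, n_features=20,
                           flip_y=0.2,          # 20% label noise
                           random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
rng = np.random.default_rng(0)

# Train base classifiers on bootstrap samples of the noisy training data.
members = []
for _ in range(100):
    idx = rng.integers(0, len(X_tr), size=len(X_tr))     # bootstrap sample
    members.append(DecisionTreeClassifier().fit(X_tr[idx], y_tr[idx]))

# Combine by majority vote and compare with a single base classifier.
votes = np.stack([m.predict(X_te) for m in members])
ensemble_pred = (votes.mean(axis=0) > 0.5).astype(int)
single = DecisionTreeClassifier(random_state=0).fit(X_tr, y_tr)
print("stand-alone accuracy:", single.score(X_te, y_te))
print("ensemble accuracy   :", (ensemble_pred == y_te).mean())
```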
Abstract:
Lying to participants offers an experimenter the enticing prospect of making “others' behaviour” a controlled variable, but is eschewed by experimental economists because it may pollute the pool of subjects. This paper proposes and implements a new experimental design, the Conditional Information Lottery, which offers all the benefits of deception without actually deceiving anyone. The design should be suitable for most economics experiments, and works by a modification of an already standard device, the Random Lottery incentive system. The deceptive scenarios of designs which use deceit are replaced with fictitious scenarios, each of which, from a subject's viewpoint, has a chance of being true. The design is implemented in a sequential play public good experiment prompted by Weimann's (1994) result, from a deceptive design, that subjects are more sensitive to freeriding than cooperation on the part of others. The experiment provides similar results to Weimann's, in that subjects are at least as cooperative when uninformed about others' behaviour as they are if reacting to high contributions. No deception is used and the data cohere well both internally and with other public goods experiments. In addition, simultaneous play is found to be more efficient than sequential play, and subjects contribute less at the end of a sequence than at the start. The results suggest pronounced elements of overconfidence, egoism and (biased) reciprocity in behaviour, which may explain decay in contributions in repeated play designs. The experiment shows there is a workable alternative to deception.
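A minimal toy sketch of the Conditional Information Lottery device described above, with hypothetical scenario values, decision rule, and payoff rule: the subject responds to every scenario about others' behaviour, exactly one of which is real, and only that one determines payment, so nothing false is ever stated.

```python
"""Toy sketch of the Conditional Information Lottery device described in the
abstract: the subject responds to several scenarios about others' behaviour,
exactly one of which is real, and only that one determines payment, so no
false information is ever given. Scenario values, the subject's decision rule
and the payoff rule are all hypothetical."""

scenarios = [0, 5, 10, 15, 20]      # possible "others' average contribution"
real_scenario = 10                  # the scenario that actually occurred

def subject_choice(others_contribution):
    """Stand-in for a subject's decision rule (mild reciprocity)."""
    return min(20, round(0.6 * others_contribution + 2))

decisions = {s: subject_choice(s) for s in scenarios}   # one decision per scenario

endowment, mpcr, group_others = 20, 0.5, 3
own = decisions[real_scenario]                          # only the real scenario pays
payoff = endowment - own + mpcr * (own + group_others * real_scenario)
print(decisions)
print("paid on scenario", real_scenario, "-> payoff", payoff)
```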
Abstract:
The relative contributions of five variables of virtual reality systems (stereoscopy, screen size, field of view, level of realism, and level of detail) to spatial comprehension and presence are evaluated here. Using a variable-centered approach instead of an object-centric view as its theoretical basis, the contributions of these five variables and their two-way interactions are estimated through a 2⁵⁻¹ fractional factorial experiment (screening design) of resolution V with 84 subjects. The experiment design, procedure, measures used, creation of scales and indices, results of statistical analysis, their meaning, and an agenda for future research are elaborated.
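For illustration, a 2⁵⁻¹ resolution V half fraction can be built with the textbook generator E = ABCD (defining relation I = ABCDE). The sketch below shows that construction; it is not necessarily the exact design matrix or run order used in the study.

```python
"""Sketch of a 2^(5-1) resolution V screening design: a half fraction of the
full 2^5 factorial built with the textbook generator E = ABCD (defining
relation I = ABCDE). Illustrative construction only, not necessarily the
exact runs used in the study."""
from itertools import product

factors = ["stereoscopy", "screen size", "field of view", "realism", "detail"]
runs = []
for a, b, c, d in product((-1, 1), repeat=4):   # full factorial in 4 factors
    e = a * b * c * d                            # fifth column from the generator
    runs.append((a, b, c, d, e))

print(len(runs), "runs instead of", 2 ** 5)      # 16 instead of 32
for row in runs[:4]:
    print(dict(zip(factors, row)))
```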
Abstract:
In the present paper we study the approximation of functions with bounded mixed derivatives by sparse tensor product polynomials in positive-order tensor product Sobolev spaces. We introduce a new sparse polynomial approximation operator which exhibits optimal convergence properties in L² and in a tensorized Sobolev norm simultaneously on a standard k-dimensional cube. In the special case k = 2 the suggested approximation operator is also optimal in L² and tensorized H¹ (without essential boundary conditions). This allows us to construct an optimal sparse p-version FEM with sparse piecewise continuous polynomial splines, reducing the number of unknowns from the O(p²) needed for the full tensor product computation to O(p log p) for the suggested sparse technique, while preserving the same optimal convergence rate in terms of p. We apply this result to an elliptic differential equation and an elliptic integral equation with random loading and compute the covariances of the solutions with a correspondingly reduced number of unknowns. Several numerical examples support the theoretical estimates.
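The following sketch illustrates the size reduction in two dimensions, using a hyperbolic-cross-type index set as a stand-in for the paper's sparse polynomial space: the full space of degree p has (p+1)² coefficients, while the sparse set grows roughly like p log p.

```python
"""Illustration of the degree-of-freedom reduction behind sparse tensor
product approximation in 2-D: the full space of degree p has (p+1)^2
coefficients, while a hyperbolic-cross-type index set
{(i, j): (i+1)(j+1) <= p+1} grows only like p*log(p). The index set is a
standard stand-in, not necessarily the paper's exact construction."""
import math

def full_count(p):
    return (p + 1) ** 2

def sparse_count(p):
    return sum(1 for i in range(p + 1) for j in range(p + 1)
               if (i + 1) * (j + 1) <= p + 1)

for p in (8, 16, 32, 64, 128):
    print(p, full_count(p), sparse_count(p),
          round(sparse_count(p) / ((p + 1) * math.log(p + 1)), 2))
```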
Abstract:
We present simultaneous multicolor infrared and optical photometry of the black hole X-ray transient XTE J1118+480 during its short 2005 January outburst, supported by simultaneous X-ray observations. The variability is dominated by short timescales, ∼10 s, although a weak superhump also appears to be present in the optical. The rapid optical variations, at least, are well correlated with those in X-rays. Infrared JHKs photometry, as in the previous outburst, exhibits especially large-amplitude variability. The spectral energy distribution (SED) of the variable infrared component can be fitted with a power law of slope α = −0.78 ± 0.07, where F_ν ∝ ν^α. There is no compelling evidence for evolution in the slope over five nights, during which time the source brightness decayed along almost the same track as seen in variations within the nights. We conclude that both the short-term variability and the longer-timescale fading are dominated by a single component of constant spectral shape. We cannot fit the SED of the IR variability with a credible thermal component, either optically thick or thin. This IR SED is, however, approximately consistent with optically thin synchrotron emission from a jet. These observations therefore provide indirect evidence supporting jet-dominated models for XTE J1118+480, and also provide a direct measurement of the slope of the optically thin emission, which is impossible based on the average spectral energy distribution alone.
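As an illustration of how such a slope is obtained (not the published photometry), α in F_ν ∝ ν^α can be recovered by linear regression in log–log space; the frequencies and flux densities below are placeholder values.

```python
"""Sketch of fitting a power-law slope alpha (F_nu ~ nu^alpha) to a
variable-component SED by linear regression in log-log space. The
frequencies and flux densities are placeholders, not the published
JHKs/optical photometry."""
import numpy as np

nu = np.array([1.4e14, 1.8e14, 2.4e14, 4.3e14, 5.5e14])   # Hz (approx. Ks, H, J, R, V)
f_nu = 3.0e-26 * (nu / 1e14) ** -0.78                      # fake fluxes with alpha = -0.78
f_nu *= np.exp(np.random.default_rng(0).normal(0, 0.03, nu.size))  # a little scatter

slope, intercept = np.polyfit(np.log10(nu), np.log10(f_nu), deg=1)
print(f"fitted alpha = {slope:.2f}")    # recovers roughly -0.78
```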
Abstract:
The question of what explains variation in expenditures on Active Labour Market Programs (ALMPs) has attracted significant scholarship in recent years. Significant insights have been gained with respect to the role of employers, unions and dual labour markets, openness, and partisanship. However, there remain significant disagreements with respect to key explanatory variables such as the role of unions or the impact of partisanship. Qualitative studies have shown that there are both good conceptual reasons and historical evidence that different ALMPs are driven by different dynamics. There is little reason to believe that programs as different as training and employment subsidies are driven by similar structural, interest-group or indeed partisan dynamics. The question is therefore whether different ALMPs have the same relationship with the key explanatory variables identified in the literature. Using regression analysis, this paper shows that the explanatory variables identified by the literature relate differently to distinct ALMPs. This refinement adds significant analytical value and shows that the disagreements are at least partly due to a dependent-variable problem of ‘over-aggregation’.
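A small simulated sketch of the 'over-aggregation' point follows: the same covariate can relate positively to one ALMP category and negatively to another, a contrast that an aggregated spending measure hides. The data and variable names are invented and are not the paper's dataset or model.

```python
"""Sketch of the 'over-aggregation' point: a synthetic union-density variable
relates positively to one ALMP category and negatively to another, which an
aggregated spending measure hides. Simulated data; not the paper's model."""
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 200
union_density = rng.normal(size=n)
training = 0.5 * union_density + rng.normal(scale=1.0, size=n)    # positive link
subsidies = -0.4 * union_density + rng.normal(scale=1.0, size=n)  # negative link
aggregate = training + subsidies

X = sm.add_constant(union_density)
for name, y in [("training", training), ("subsidies", subsidies),
                ("aggregate ALMP", aggregate)]:
    beta = sm.OLS(y, X).fit().params[1]
    print(f"{name:15s} coefficient on union density: {beta:+.2f}")
```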
Abstract:
A Bayesian analysis is given of an instrumental variable model that allows for heteroscedasticity in both the structural equation and the instrument equation. Specifically, the approach for dealing with heteroscedastic errors in Geweke (1993) is extended to the Bayesian instrumental variable estimator outlined in Rossi et al. (2005). Heteroscedasticity is treated by modelling the variance of each error with a hierarchical prior that is Gamma distributed. The computation is carried out using a Markov chain Monte Carlo sampling algorithm with an augmented draw for the heteroscedastic case. An example using real data illustrates the approach and shows that ignoring heteroscedasticity in the instrument equation when it exists may lead to biased estimates.
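As an illustration of the hierarchical variance treatment described (following Geweke, 1993, not the authors' MCMC sampler), drawing each error's precision from a Gamma(ν/2, rate ν/2) prior makes the errors marginally Student-t with ν degrees of freedom; the simulation below checks that equivalence.

```python
"""Illustration of the hierarchical error-variance treatment described
(following Geweke, 1993): each error gets its own variance scale lambda_i,
with the precision 1/lambda_i drawn from a Gamma(nu/2, rate nu/2) prior, so
the errors are marginally Student-t with nu degrees of freedom. A small
simulation of that equivalence, not the authors' MCMC sampler."""
import numpy as np

rng = np.random.default_rng(0)
nu, sigma, n = 5.0, 1.0, 200_000

precisions = rng.gamma(shape=nu / 2, scale=2 / nu, size=n)    # 1/lambda_i ~ Gamma(nu/2, rate nu/2)
scale_mixture = rng.normal(0.0, sigma / np.sqrt(precisions))  # eps_i | lambda_i ~ N(0, sigma^2*lambda_i)
student_t = sigma * rng.standard_t(df=nu, size=n)

qs = [0.05, 0.25, 0.5, 0.75, 0.95]
print(np.quantile(scale_mixture, qs).round(2))
print(np.quantile(student_t, qs).round(2))      # quantiles agree closely
```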
Abstract:
The hypothesis that pronouns can be resolved via either the syntax or the discourse representation has played an important role in linguistic accounts of pronoun interpretation (e.g. Grodzinsky & Reinhart, 1993). We report the results of an eye-movement monitoring study investigating the relative timing of syntactically-mediated variable binding and discourse-based coreference assignment during pronoun resolution. We examined whether ambiguous pronouns are preferentially resolved via either the variable binding or coreference route, and in particular tested the hypothesis that variable binding should always be computed before coreference assignment. Participants’ eye movements were monitored while they read sentences containing a pronoun and two potential antecedents, a c-commanding quantified noun phrase and a non c-commanding proper name. Gender congruence between the pronoun and either of the two potential antecedents was manipulated as an experimental diagnostic for dependency formation. In two experiments, we found that participants’ reading times were reliably longer when the linearly closest antecedent mismatched in gender with the pronoun. These findings fail to support the hypothesis that variable binding is computed before coreference assignment, and instead suggest that antecedent recency plays an important role in affecting the extent to which a variable binding antecedent is considered. We discuss these results in relation to models of memory retrieval during sentence comprehension, and interpret the antecedent recency preference as an example of forgetting over time.