156 results for Shared component model
Abstract:
Item noise models of recognition assert that interference at retrieval is generated by the words from the study list. Context noise models of recognition assert that interference at retrieval is generated by the contexts in which the test word has appeared. The authors introduce the bind cue decide model of episodic memory, a Bayesian context noise model, and demonstrate how it can account for data from the item noise and dual-processing approaches to recognition memory. From the item noise perspective, list strength and list length effects, the mirror effect for word frequency and concreteness, and the effects of the similarity of other words in a list are considered. From the dual-processing perspective, process dissociation data on the effects of length, temporal separation of lists, strength, and diagnosticity of context are examined. The authors conclude that the context noise approach to recognition is a viable alternative to existing approaches.
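The Bayesian character of such a model can be illustrated with a generic likelihood-ratio decision rule, in which the evidence for "old" versus "new" is compared feature by feature. This is a hypothetical sketch only; the feature-match probabilities below are placeholders, not the BCD model's actual likelihoods.

```python
import math

def log_likelihood_ratio(matches, n_features, p_match_old=0.8, p_match_new=0.5):
    """Log-odds of observing `matches` matching features out of `n_features`
    under an 'old' hypothesis versus a 'new' hypothesis.
    The probabilities 0.8 and 0.5 are illustrative placeholders."""
    mismatches = n_features - matches
    ll_old = matches * math.log(p_match_old) + mismatches * math.log(1 - p_match_old)
    ll_new = matches * math.log(p_match_new) + mismatches * math.log(1 - p_match_new)
    return ll_old - ll_new

def decide(matches, n_features):
    # Respond 'old' when the evidence favours the 'old' hypothesis.
    return "old" if log_likelihood_ratio(matches, n_features) > 0 else "new"
```

With these placeholder parameters, a probe matching most stored features is classified "old", and one matching at chance level is classified "new".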
Abstract:
The World Wide Web (WWW) is useful for distributing scientific data. Most existing web data resources organize their information either in structured flat files or relational databases with basic retrieval capabilities. For databases with one or a few simple relations, these approaches are successful, but they can be cumbersome when there is a data model involving multiple relations between complex data. We believe that knowledge-based resources offer a solution in these cases. Knowledge bases have explicit declarations of the concepts in the domain, along with the relations between them. They are usually organized hierarchically, and provide a global data model with a controlled vocabulary. We have created the OWEB architecture for building online scientific data resources using knowledge bases. OWEB provides a shell for structuring data, providing secure and shared access, and creating computational modules for processing and displaying data. In this paper, we describe the translation of the online immunological database MHCPEP into an OWEB system called MHCWeb. This effort involved building a conceptual model for the data, creating a controlled terminology for the legal values for different types of data, and then translating the original data into the new structure. The OWEB environment allows for flexible access to the data by both users and computer programs.
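The controlled-vocabulary enforcement described here can be sketched minimally in Python. The class names and vocabulary entries are illustrative, not the actual OWEB/MHCWeb schema:

```python
class Concept:
    """A node in a simple hierarchical knowledge base: a named concept whose
    slot values are restricted to a controlled vocabulary.
    Hypothetical sketch, not the actual OWEB/MHCWeb data model."""
    def __init__(self, name, parent=None):
        self.name = name
        self.parent = parent   # hierarchical organisation of concepts
        self.slots = {}

    def set_slot(self, slot, value, vocab):
        # Enforce the controlled terminology of legal values.
        if value not in vocab.get(slot, set()):
            raise ValueError(f"{value!r} is not a legal value for {slot!r}")
        self.slots[slot] = value

# Illustrative controlled terminology for an immunological record.
vocab = {"species": {"human", "mouse"}, "mhc_class": {"I", "II"}}

peptide = Concept("PeptideBindingRecord", parent=Concept("BiologicalEntity"))
peptide.set_slot("species", "human", vocab)
```

Attempting to store a value outside the controlled vocabulary raises an error, which is the property that distinguishes this organisation from a free-form flat file.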
Abstract:
The majority of past and current individual-tree growth modelling methodologies have failed to characterise and incorporate structured stochastic components. Rather, they have relied on deterministic predictions or have added an unstructured random component to predictions. In particular, spatial stochastic structure has been neglected, despite being present in most applications of individual-tree growth models. Spatial stochastic structure (also called spatial dependence or spatial autocorrelation) eventuates when spatial influences such as competition and micro-site effects are not fully captured in models. Temporal stochastic structure (also called temporal dependence or temporal autocorrelation) eventuates when a sequence of measurements is taken on an individual tree over time, and variables explaining temporal variation in these measurements are not included in the model. Nested stochastic structure eventuates when measurements are combined across sampling units and differences among the sampling units are not fully captured in the model. This review examines spatial, temporal, and nested stochastic structure and instances where each has been characterised in the forest biometry and statistical literature. Methodologies for incorporating stochastic structure in growth model estimation and prediction are described. Benefits from incorporation of stochastic structure include valid statistical inference, improved estimation efficiency, and more realistic and theoretically sound predictions. It is proposed in this review that individual-tree modelling methodologies need to characterise and include structured stochasticity. Possibilities for future research are discussed. (C) 2001 Elsevier Science B.V. All rights reserved.
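As a concrete illustration of temporal stochastic structure, the lag-1 autocorrelation of a residual series can be computed as below. This is a minimal diagnostic sketch, not a method taken from the review:

```python
def lag1_autocorrelation(x):
    """Lag-1 sample autocorrelation of a sequence of residuals.
    Values near zero suggest no temporal dependence; positive values
    indicate the temporal stochastic structure discussed in the text."""
    n = len(x)
    mean = sum(x) / n
    num = sum((x[i] - mean) * (x[i + 1] - mean) for i in range(n - 1))
    den = sum((xi - mean) ** 2 for xi in x)
    return num / den
```

A trending residual series gives a positive value, signalling that variables explaining temporal variation are missing from the model; an alternating series gives a negative value.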
Abstract:
A method involving bubbling of air through a fibrous filter immersed in water has recently been investigated (Agranovski et al. [1]). Experimental results showed that the removal efficiency for ultra-fine aerosols by such filters was greatly increased compared to dry filters. Nuclear Magnetic Resonance (NMR) imaging was used to examine the wet filter and to determine the nature of the gas flow inside the filter (Agranovski et al. [2]). It was found that tortuous preferential pathways (or flow tubes) develop within the filter through which the air flows, and the distribution of air and water inside the porous medium has been investigated. The aim of this paper is to investigate the geometry of the pathways and to make estimates of the flow velocities and particle removal efficiency in such pathways. A mathematical model of the flow of air along the preferred pathways has been developed and verified experimentally. Even for the highest realistic gas velocity the flow field was essentially laminar (Re ≈ 250). We solved Laplace's equation for the stream function to map trajectories of particles and gas molecules to investigate the possibility of their removal from the carrier.
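The stream-function computation mentioned above can be sketched with a standard Gauss-Seidel relaxation of Laplace's equation on a rectangular grid. The grid and boundary handling are illustrative, not the authors' actual solver:

```python
def solve_laplace(psi, fixed, n_iter=2000):
    """Gauss-Seidel relaxation for Laplace's equation.
    psi:   2-D list of stream-function values.
    fixed: same shape; True where boundary values are held constant.
    Each interior point is repeatedly replaced by the average of its
    four neighbours until the field relaxes to a harmonic solution."""
    ny, nx = len(psi), len(psi[0])
    for _ in range(n_iter):
        for j in range(1, ny - 1):
            for i in range(1, nx - 1):
                if not fixed[j][i]:
                    psi[j][i] = 0.25 * (psi[j - 1][i] + psi[j + 1][i]
                                        + psi[j][i - 1] + psi[j][i + 1])
    return psi
```

Contours of the converged psi field are streamlines, so particle trajectories through a flow tube can be traced directly from the solution.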
Abstract:
The Jordan-Wigner fermionization for the one-dimensional Bariev model of three coupled XY chains is formulated. The L-matrix in terms of fermion operators and the R-matrix are presented explicitly. Furthermore, the graded reflection equations and their solutions are discussed.
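For reference, the single-chain Jordan-Wigner transformation that such a fermionization generalizes (written here for one species of spinless fermion; the three-chain construction in the paper introduces additional string factors between chains) reads:

```latex
\sigma_j^{+} = c_j^{\dagger}\exp\Bigl(i\pi\sum_{k<j} c_k^{\dagger}c_k\Bigr),\qquad
\sigma_j^{-} = \exp\Bigl(-i\pi\sum_{k<j} c_k^{\dagger}c_k\Bigr)c_j,\qquad
\sigma_j^{z} = 2\,c_j^{\dagger}c_j - 1 .
```

The nonlocal string factor restores the correct commutation relations between spin operators on different sites while the c operators anticommute.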
Abstract:
We measure the spectral properties of a representative sub-sample of 187 quasars, drawn from the Parkes Half-Jansky Flat-radio-spectrum Sample (PHFS). Quasars with a wide range of rest-frame optical/UV continuum slopes are included in the analysis: their colours range over 2 < B-K < 7. We present composite spectra of red and blue sub-samples of the PHFS quasars, and tabulate their emission line properties. The median Hβ and [O III] emission line equivalent widths of the red quasar sub-sample are a factor of ten weaker than those of the blue quasar sub-sample. No significant differences are seen between the equivalent width distributions of the C IV, C III] and Mg II lines. Both the colours and the emission line equivalent widths of the red quasars can be explained by the addition of a featureless red synchrotron continuum component to an otherwise normal blue quasar spectrum. The red synchrotron component must have a spectrum at least as red as a power law of the form F_ν ∝ ν^(-2.8). The relative strengths of the blue and red components span two orders of magnitude at rest-frame 500 nm. The blue component is weaker relative to the red component in low optical luminosity sources. This suggests that the fraction of accretion energy going into optical emission from the jet is greater in low luminosity quasars. This correlation between colour and luminosity may be of use in cosmological distance scale work. This synchrotron model does not, however, fit ~10% of the quasars, which have both red colours and high equivalent width emission lines. We hypothesise that these red, strong-lined quasars have intrinsically weak Big Blue Bumps. There is no discontinuity in spectral properties between the BL Lac objects in our sample and the other quasars. BL Lac objects appear to be the red, low equivalent width tail of a continuous distribution.
The synchrotron emission component only dominates the spectrum at longer wavelengths, so existing BL Lac surveys will be biased against high-redshift objects. This will affect measurements of BL Lac evolution. The blue PHFS quasars have significantly higher equivalent width C IV, Hβ and [O III] emission than a matched sample of optically selected QSOs.
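The two-component continuum decomposition can be sketched as follows. Only the red slope of -2.8 comes from the text; the blue-component slope is a hypothetical placeholder:

```python
def composite_flux(nu, f_blue, f_red, alpha_blue=-0.3, alpha_red=-2.8):
    """Toy two-component quasar continuum: a blue power law plus a red
    synchrotron power law F_nu ∝ nu^(-2.8), each normalized to its flux
    at rest-frame 500 nm. The blue slope -0.3 is a placeholder; only the
    red slope is taken from the abstract."""
    nu_500 = 3e8 / 500e-9   # frequency corresponding to 500 nm, in Hz
    x = nu / nu_500
    return f_blue * x ** alpha_blue + f_red * x ** alpha_red
```

At frequencies below nu_500 (wavelengths longer than 500 nm) the steep red component rapidly dominates, which is how a featureless synchrotron continuum can redden the colours while diluting, rather than removing, the emission lines.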
Abstract:
The q-deformed supersymmetric t-J model on a semi-infinite lattice is diagonalized by using the level-one vertex operators of the quantum affine superalgebra U_q[sl(2|1)]. We give the bosonization of the boundary states, present an integral expression for the correlation functions of the boundary model, and derive the difference equations which they satisfy.
Abstract:
Time availability is a key concept in relation to volunteering, leading organisations and governments to target those outside paid work as a potential source of volunteers. It may be that factors such as a growth in female participation in the labour market and an increase in work hours will lead to more people saying they are simply too busy to volunteer. This paper discusses how social and economic changes, such as changing work patterns, are impacting on time availability. Using the 1997 ABS Time Use data, it identifies a predictive model of spare time by looking at demographic, life stage and employment related variables. Results confirm that those outside paid work, particularly the young, males and those without partners or children, are the groups most likely to have time to spare. These groups do not currently report high rates of volunteering. The paper concludes by questioning the premise that people will volunteer simply because they have time to spare. This is just one component of a range of motivations and factors that influence the decision to volunteer.
Abstract:
Event-related potentials (ERPs) were recorded while subjects made old/new recognition judgments on new unstudied words and old words which had been presented at study either once ('weak') or three times ('strong'). The probability of an 'old' response was significantly higher for strong than weak words and significantly higher for weak than new words. Comparisons were made initially between ERPs to new, weak and strong words, and subsequently between ERPs associated with six strength-by-response conditions. The N400 component was found to be modulated by memory trace strength in a graded manner. Its amplitude was most negative in new word ERPs and most positive in strong word ERPs. This 'N400 strength effect' was largest at the left parietal electrode (in ear-referenced ERPs). The amplitude of the late positive complex (LPC) effect was sensitive to decision accuracy (and perhaps confidence). Its amplitude was larger in ERPs evoked by words attracting correct versus incorrect recognition decisions. The LPC effect had a left > right, centro-parietal scalp topography (in ear-referenced ERPs). Hence, whereas the majority of previous ERP studies of episodic recognition have interpreted results from the perspective of dual-process models, we provide alternative interpretations of N400 and LPC old/new effects in terms of memory strength and decisional factor(s). (C) 2002 Elsevier Science Ltd. All rights reserved.
Abstract:
This paper is concerned with the use of scientific visualization methods for the analysis of feedforward neural networks (NNs). Inevitably, the kinds of data associated with the design and implementation of neural networks are of very high dimensionality, presenting a major challenge for visualization. A method is described using the well-known statistical technique of principal component analysis (PCA). This is found to be an effective and useful method of visualizing the learning trajectories of many learning algorithms such as back-propagation and can also be used to provide insight into the learning process and the nature of the error surface.
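A minimal sketch of this technique in NumPy: record the network's weight vector at each training epoch, then project the sequence onto its first two principal components to obtain a 2-D learning trajectory. This is illustrative only, not the paper's implementation:

```python
import numpy as np

def pca_trajectory(weights, k=2):
    """Project a sequence of weight vectors (one row per training epoch)
    onto their first k principal components, giving low-dimensional
    coordinates for visualizing the learning trajectory."""
    W = np.asarray(weights, dtype=float)
    W = W - W.mean(axis=0)                       # center the snapshots
    U, S, Vt = np.linalg.svd(W, full_matrices=False)
    return W @ Vt[:k].T                          # (n_epochs, k) coordinates
```

Plotting the two returned columns against each other traces how back-propagation (or any other learning algorithm) moves through weight space, which is the visualization the paper builds on.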
Abstract:
To reconstruct oceanographic variations in the subtropical South Pacific, 271-year-long subseasonal time series of Sr/Ca and δ18O were generated from a coral growing at Rarotonga (21.5°S, 159.5°W). In this case, coral Sr/Ca appears to be an excellent proxy for sea surface temperature (SST) and coral δ18O is a function of both SST and seawater δ18O composition (δ18O_sw). Here, we focus on extracting the δ18O_sw signal from these proxy records. A method is presented assuming that coral Sr/Ca is solely a function of SST and that coral δ18O is a function of both SST and δ18O_sw. This method separates the effects of δ18O_sw from SST by breaking the instantaneous changes of coral δ18O into separate contributions by instantaneous SST and δ18O_sw changes, respectively. The results show that on average δ18O_sw at Rarotonga explains ~39% of the variance in δ18O and that variations in SST explain the remaining ~61% of δ18O variance. Reconstructed δ18O_sw shows systematic increases in summer months (December-February), consistent with the regional pattern of variations in precipitation and evaporation. The δ18O_sw also shows a positive linear correlation with satellite-derived estimated salinity for the period 1980 to 1997 (r = 0.72). This linear correlation between reconstructed δ18O_sw and salinity makes it possible to use the reconstructed δ18O_sw to estimate past interannual and decadal salinity changes in this region. Comparisons of coral δ18O and δ18O_sw at Rarotonga with the Pacific decadal oscillation index suggest that the decadal and interdecadal salinity and SST variability at Rarotonga appears to be related to basin-scale decadal variability in the Pacific. Copyright (C) 2002 Elsevier Science Ltd.
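The separation logic can be sketched as follows. All calibration coefficients here are hypothetical placeholders; the paper's actual regression values are not given in the abstract:

```python
def sst_from_srca(srca, a0=10.5, a1=-0.062):
    """Invert a linear Sr/Ca-SST calibration, Sr/Ca = a0 + a1*SST.
    The coefficients a0, a1 are illustrative placeholders."""
    return (srca - a0) / a1

def d18o_sw(d18o_coral, srca, b=-0.21, a0=10.5, a1=-0.062):
    """Remove the SST contribution (slope b per degree C, a placeholder)
    from coral d18O to isolate the seawater d18O signal, following the
    logic of the method described above: Sr/Ca gives SST alone, and
    d18O minus its SST term leaves d18O_sw."""
    sst = sst_from_srca(srca, a0, a1)
    return d18o_coral - b * sst
```

With a synthetic record built from known SST and seawater terms, the subtraction recovers the seawater component exactly, which is the property the method relies on.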
Abstract:
The physical conditions required to provide for the tectonic stability of cratonic crust and for the relative longevity of deep cratonic lithosphere within a dynamic, convecting mantle are explored through a suite of numerical simulations. The simulations allow chemically distinct continents to reside within the upper thermal boundary layer of a thermally convecting mantle layer. A rheologic formulation, which models both brittle and ductile behavior, is incorporated to allow for plate-like behavior and the associated subduction of oceanic lithosphere. Several mechanisms that may stabilize cratons are considered. The two most often invoked mechanisms, chemical buoyancy and/or high viscosity of cratonic root material, are found to be relatively ineffective if cratons come into contact with subduction zones. High root viscosity can provide for stability and longevity but only within a thick root limit in which the thickness of chemically distinct, high-viscosity cratonic lithosphere exceeds the thickness of old oceanic lithosphere by at least a factor of 2. This end-member implies a very thick mechanical lithosphere for cratons. A high brittle yield stress for cratonic lithosphere as a whole, relative to oceanic lithosphere, is found to be an effective and robust means for providing stability and lithospheric longevity. This mode does not require exceedingly deep strength within cratons. A high yield stress for only the crustal or mantle component of the cratonic lithosphere is found to be less effective, as detachment zones can then form at the crust-mantle interface, which decreases the longevity potential of cratonic roots. The degree of yield stress variations between cratonic and oceanic lithosphere required for stability and longevity can be decreased if cratons are bordered by continental lithosphere that has a relatively low yield stress, i.e., mobile belts.
Simulations that combine all the mechanisms can lead to crustal stability and deep root longevity for model cratons over several mantle overturn times, but the dominant stabilizing factor remains a relatively high brittle yield stress for cratonic lithosphere.
Abstract:
The paper presents a theory for modeling flow in anisotropic, viscous rock. This theory was originally developed for the simulation of large deformation processes including the folding and kinking of multi-layered visco-elastic rock (Muhlhaus et al. [1,2]). The orientation of slip planes in the context of crystallographic slip is determined by the normal vector - the director - of these surfaces. The model is applied to simulate anisotropic mantle convection. We compare the evolution of flow patterns, Nusselt number and director orientations for isotropic and anisotropic rheologies. In the simulations we utilize two different finite element methodologies: the Lagrangian Integration Point Method (Moresi et al. [8]) and an Eulerian formulation, which we implemented into the finite element based PDE solver Fastflo (www.cmis.csiro.au/Fastflo/). The reason for utilizing two different finite element codes was firstly to study the influence of an anisotropic power-law rheology, which currently is not implemented in the Lagrangian Integration Point scheme [8], and secondly to compare the numerical performance of the Eulerian (Fastflo) and Lagrangian integration schemes [8]. It turned out that, whereas in the Lagrangian method the Nusselt number versus time plot reached only a quasi-steady state in which the Nusselt number oscillates around a steady-state value, the Eulerian scheme reaches exact steady states and produces a high degree of alignment (director orientation locally orthogonal to the velocity vector almost everywhere in the computational domain). In the simulations, emergent anisotropy was strongest in terms of modulus contrast in the up- and down-welling plumes. Mechanisms for anisotropic material behavior in the mantle dynamics context are discussed by Christensen [3]. The dominant mineral phases in the mantle generally do not exhibit strong elastic anisotropy, but they still may be oriented by the convective flow.
Thus viscous anisotropy (the main focus of this paper) may or may not correlate with elastic or seismic anisotropy.