161 results for Semi-Representation
Abstract:
Following on from the companion study (Johnson et al., 2006), a photochemical trajectory model (PTM) has been used to simulate the chemical composition of organic aerosol for selected events during the 2003 TORCH (Tropospheric Organic Chemistry Experiment) field campaign. The PTM incorporates the speciated emissions of 124 nonmethane anthropogenic volatile organic compounds (VOC) and three representative biogenic VOC, a highly detailed representation of the atmospheric degradation of these VOC, the emission of primary organic aerosol (POA) material, and the formation of secondary organic aerosol (SOA) material. SOA formation was represented by the transfer of semi- and non-volatile oxidation products from the gas phase to a condensed organic aerosol phase, according to estimated thermodynamic equilibrium phase-partitioning characteristics for around 2000 reaction products. After significantly scaling all phase-partitioning coefficients and assuming a persistent background organic aerosol (both required in order to match the observed organic aerosol loadings), the detailed chemical composition of the simulated SOA has been investigated in terms of intermediate oxygenated species in the Master Chemical Mechanism, version 3.1 (MCM v3.1). For the various case studies considered, 90% of the simulated SOA mass comprises between ca. 70 and 100 multifunctional oxygenated species derived, in varying amounts, from the photooxidation of VOC of anthropogenic and biogenic origin. The anthropogenic contribution is dominated by aromatic hydrocarbons, and the biogenic contribution by alpha- and beta-pinene (which also constitute surrogates for other emitted monoterpene species). The sensitivity of the simulated SOA mass to changes in the emission rates of anthropogenic and biogenic VOC has also been investigated for 11 case study events, and the results have been compared with the detailed chemical composition data. The role of accretion chemistry in SOA formation, and its implications for the results of the present investigation, are discussed.
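The absorptive partitioning described above follows the standard equilibrium framework, in which the condensed fraction of each product depends on its partitioning coefficient Kp and on the total absorbing organic mass M0; since M0 itself depends on how much material condenses, the equilibrium must be found iteratively. Below is a minimal Python sketch of that fixed-point calculation; the species list and Kp values are illustrative placeholders, not the MCM v3.1 data used in the study.

```python
import numpy as np

def partition(total_conc, kp, background_oa, scale=1.0, tol=1e-9, max_iter=200):
    """Iterate absorptive gas/aerosol partitioning to equilibrium.

    total_conc    : gas + aerosol concentration of each product (ug m^-3)
    kp            : equilibrium partitioning coefficients (m^3 ug^-1)
    background_oa : persistent background organic aerosol (ug m^-3)
    scale         : species-independent scaling applied to all Kp values
    """
    kp = scale * np.asarray(kp, dtype=float)
    total_conc = np.asarray(total_conc, dtype=float)
    m0 = background_oa  # initial guess for the absorbing organic mass
    for _ in range(max_iter):
        # condensed fraction of each species at the current M0
        f_aer = kp * m0 / (1.0 + kp * m0)
        soa = np.sum(f_aer * total_conc)
        m0_new = background_oa + soa
        if abs(m0_new - m0) < tol:
            break
        m0 = m0_new
    return soa, m0

# Hypothetical example: three semi-volatile products
soa, m0 = partition(total_conc=[2.0, 1.0, 0.5],
                    kp=[1e-4, 1e-3, 1e-2],
                    background_oa=0.7, scale=500.0)
print(f"SOA = {soa:.2f} ug m^-3, total OA = {m0:.2f} ug m^-3")
```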
Abstract:
A photochemical trajectory model has been used to simulate the chemical evolution of air masses arriving at the TORCH field campaign site in the southern UK during late July and August 2003, a period which included a widespread and prolonged photochemical pollution episode. The model incorporates speciated emissions of 124 nonmethane anthropogenic VOC and three representative biogenic VOC, coupled with a comprehensive description of the chemistry of their degradation. A representation of the gas/aerosol absorptive partitioning of ca. 2000 oxygenated organic species generated in the Master Chemical Mechanism (MCM v3.1) has been implemented, allowing simulation of the contribution to organic aerosol (OA) made by semi- and non-volatile products of VOC oxidation; emissions of primary organic aerosol (POA) and elemental carbon (EC) are also represented. Simulations of total OA mass concentrations in nine case study events (optimised by comparison with observed hourly-mean mass loadings derived from aerosol mass spectrometry measurements) imply that the OA can be ascribed to three general sources: (i) POA emissions; (ii) a "ubiquitous" background concentration of 0.7 μg m⁻³; and (iii) gas-to-aerosol transfer of lower-volatility products of VOC oxidation generated by the regional-scale processing of emitted VOC, but with all partitioning coefficients increased by a species-independent factor of 500. The requirement to scale the partitioning coefficients, and the implied background concentration, are both indicative of chemical processes within the aerosol which allow the oxidised organic species to react via association and/or accretion reactions that generate even lower-volatility products, leading to a persistent, non-volatile secondary organic aerosol (SOA). The contribution of secondary organic material to the simulated OA results in significant elevations in the simulated ratio of organic carbon (OC) to EC, compared with the ratio of 1.1 assigned to the emitted components. For the selected case study events, [OC]/[EC] is calculated to lie in the range 2.7-9.8, values which are comparable with the high end of the range reported in the literature.
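The three-source attribution lends itself to a simple mass balance: total OA is POA plus the ubiquitous background plus the scaled-partitioning SOA, from which an [OC]/[EC] ratio follows. The sketch below illustrates that bookkeeping; apart from the quoted 0.7 μg m⁻³ background, the factor of 500, and the emitted [OC]/[EC] of 1.1, every input is hypothetical, and the organic-matter-to-organic-carbon conversion factor is an assumption of this sketch, not a value given in the abstract.

```python
# Minimal OA budget sketch (values other than the quoted 0.7 ug m^-3
# background, the x500 Kp scaling, and the emitted OC/EC of 1.1 are hypothetical).
POA = 1.5          # primary organic aerosol, ug m^-3 (hypothetical)
BACKGROUND = 0.7   # ubiquitous background OA, ug m^-3 (from the study)
SOA = 3.0          # SOA from partitioning with Kp x 500 (hypothetical)
EC = 0.8           # elemental carbon, ug m^-3 (hypothetical)

OM_TO_OC = 1.4     # assumed organic-matter-to-organic-carbon ratio

total_oa = POA + BACKGROUND + SOA
oc = total_oa / OM_TO_OC           # convert OA mass to carbon mass
print(f"total OA = {total_oa:.1f} ug m^-3")
print(f"[OC]/[EC] = {oc / EC:.1f}  (emitted components alone give 1.1)")
```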
Abstract:
The paper is an investigation of the exchange of ideas and information between an architect and building users in the early stages of the building design process, before the design brief or any drawings have been produced. The purpose of the research is to gain insight into the type of information users exchange with architects in early design conversations, and to better understand the influence that the format of design interactions and interactional behaviours have on the exchange of information. We report an empirical study of pre-briefing conversations in which the overwhelming majority of the exchanges were about the functional or structural attributes of space; discussions that touched on the phenomenological, perceptual and symbolic meanings of space were rare. We explore the contextual features of meetings and the conversational strategies the architect took to prompt the users for information, and the influence these had on the information provided. Recommendations are made on the format and structure of pre-briefing conversations and on designers' strategies for raising the level of information provided by the user beyond the functional or structural attributes of space.
Abstract:
Background: Recent research provides evidence for specific disturbance in feeding and growth in children of mothers with eating disorders. Aim: To investigate the impact of maternal eating disorders during the post-natal year on the internal world of children, as expressed in children's representations of self and their mother in pretend mealtime play at 5 years of age. Methods: Children of mothers with eating disorders (n = 33) and a comparison group (n = 24) were videotaped enacting a family mealtime in pretend play. Specific classes of children's play representations were coded blind to group membership. Univariate analyses compared the groups on representations of mother and self. Logistic regression explored factors predicting pretend play representations. Results: Positive representations of the mother expressed as feeding, eating or body shape themes were more frequent in the index group. There were no other significant group differences in representations. In a logistic regression analysis, current maternal eating psychopathology was the principal predictor of these positive maternal representations. Marital criticism was associated with negative representations of the mother. Conclusions: These findings suggest that maternal eating disorders may influence the development of a child's internal world, such that children become more preoccupied with maternal eating concerns. However, more extensive research on larger samples is required to replicate these preliminary findings.
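The logistic regression step named above has a simple computational shape: a binary play-representation outcome regressed on candidate predictors. A minimal sketch of that analysis follows; the variable names, scales, and randomly generated data are entirely hypothetical stand-ins for the study's coded measures.

```python
import numpy as np
import statsmodels.api as sm

# Hypothetical recreation of the analysis shape: predict whether a child
# produced positive maternal representations from candidate predictors.
rng = np.random.default_rng(0)
n = 57  # 33 index + 24 comparison children

eating_psychopathology = rng.normal(size=n)   # hypothetical maternal symptom scale
marital_criticism = rng.normal(size=n)        # hypothetical criticism scale
positive_rep = (rng.random(n) < 0.5).astype(int)  # placeholder binary outcome

X = sm.add_constant(np.column_stack([eating_psychopathology, marital_criticism]))
model = sm.Logit(positive_rep, X).fit(disp=False)
print(model.summary(xname=["const", "eating_psychopathology", "marital_criticism"]))
```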
Abstract:
To-be-enacted material is more accessible in tests of recognition and lexical decision than material not intended for action (T. Goschke & J. Kuhl, 1993; R. L. Marsh, J. L. Hicks, & M. L. Bink, 1998). This finding has been attributed to the superior status of intention-related information. The current article explores an alternative (action-superiority) account that draws parallels between the intended enactment effect (IEE) and the subject-performed task effect. Using two paradigms, the authors observed faster recognition latencies for both enacted and to-be-enacted material. Crucially, there was no evidence of an IEE for items that had already been executed during encoding. The IEE was also eliminated when motor processing was prevented after verbal encoding. These findings suggest an overlap between overt and intended enactment and indicate that motor information may be activated for verbal material in preparation for subsequent execution.
Abstract:
The nature of the spatial representations that underlie simple visually guided actions early in life was investigated in toddlers with Williams syndrome (WS), Down syndrome (DS), and healthy chronological age- and mental age-matched controls, through the use of a "double-step" saccade paradigm. The experiment tested the hypothesis that, compared to typically developing infants and toddlers, and toddlers with DS, those with WS display a deficit in using spatial representations to guide actions. Levels of sustained attention were also measured within these groups, to establish whether differences in levels of engagement influenced performance on the double-step saccade task. The results showed that toddlers with WS were unable to combine extra-retinal information with retinal information to the same extent as the other groups, and displayed evidence of other deficits in saccade planning, suggesting a greater reliance on sub-cortical mechanisms than in the other populations. Results also indicated that their exploration of the visual environment is less developed. The sustained attention task revealed shorter and fewer periods of sustained attention in toddlers with DS, but not in those with WS, suggesting that WS performance on the double-step saccade task is not explained by poorer engagement. The findings are also discussed in relation to a possible attention disengagement deficit in WS toddlers. Our study highlights the importance of studying genetic disorders early in development.
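The double-step logic has a simple geometric core: the second saccade cannot be programmed from the retinal vector alone, because the first eye movement displaces the eye, so an accurate second saccade must combine the retinal target vector with an extra-retinal record of the first movement. A minimal sketch of that computation, with hypothetical target positions:

```python
import numpy as np

# Double-step saccade geometry (coordinates in degrees; values hypothetical).
eye_start = np.array([0.0, 0.0])
target1 = np.array([10.0, 0.0])   # first target step
target2 = np.array([10.0, 8.0])   # second target step (flashed before any eye movement)

# Retinal vector of target2, encoded while the eye is still at fixation:
retinal_vec = target2 - eye_start

# After an accurate first saccade the eye sits on target1, so the correct
# second saccade combines the retinal vector with the extra-retinal record
# of the first movement:
first_saccade = target1 - eye_start
correct_second = retinal_vec - first_saccade   # -> [0, 8]

# Relying on the retinal vector alone (no extra-retinal update) would land at:
uncorrected_landing = target1 + retinal_vec    # -> [20, 8], a large error
print(correct_second, uncorrected_landing)
```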
Abstract:
Time-frequency and temporal analyses have been widely used in biomedical signal processing. These methods represent important characteristics of a signal in both the time and frequency domains, so that essential features of the signal can be viewed and analysed in order to understand or model the physiological system. Historically, Fourier spectral analyses have provided a general method for examining global energy/frequency distributions. However, an assumption inherent in these methods is the stationarity of the signal. As a result, Fourier methods are generally not an appropriate approach for investigating signals with transient components. This work presents the application of a new signal processing technique, empirical mode decomposition combined with the Hilbert spectrum, to the analysis of electromyographic signals. The results show that this method may provide not only an increase in spectral resolution but also insight into the underlying process of muscle contraction.
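As a concrete illustration of the pipeline this abstract describes, the sketch below decomposes a synthetic nonstationary signal into intrinsic mode functions and derives instantaneous amplitude and frequency from the analytic signal. It assumes the third-party PyEMD package (pip install EMD-signal) for the sifting step, and the synthetic chirp stands in for a real EMG recording.

```python
import numpy as np
from scipy.signal import hilbert
from PyEMD import EMD  # third-party package (pip install EMD-signal); an assumption here

fs = 1000.0                                   # sampling rate, Hz
t = np.arange(0, 2.0, 1.0 / fs)
# Synthetic nonstationary stand-in for an EMG burst: a chirp plus a slow tone.
signal = np.sin(2 * np.pi * (20 + 30 * t) * t) + 0.5 * np.sin(2 * np.pi * 5 * t)

imfs = EMD()(signal)                          # sift into intrinsic mode functions

for i, imf in enumerate(imfs):
    analytic = hilbert(imf)                   # analytic signal of each IMF
    amplitude = np.abs(analytic)              # instantaneous amplitude
    phase = np.unwrap(np.angle(analytic))
    inst_freq = np.diff(phase) * fs / (2 * np.pi)  # instantaneous frequency, Hz
    print(f"IMF {i}: mean amplitude {amplitude.mean():.2f}, "
          f"mean frequency {inst_freq.mean():.1f} Hz")
```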
Abstract:
There are still major challenges in the automatic indexing and retrieval of multimedia content for very large corpora. Current indexing and retrieval applications still use keywords to index multimedia content, and those keywords usually provide no knowledge about the semantic content of the data. With the increasing amount of multimedia content, it is inefficient to continue with this approach. In this paper, we describe the DREAM project, which addresses these challenges by proposing a new framework for semi-automatic annotation and retrieval of multimedia based on semantic content. The framework uses Topic Map technology as a tool to model the knowledge automatically extracted from the multimedia content by an Automatic Labelling Engine. We describe how we acquire knowledge from the content and represent it, with the support of NLP, to automatically generate Topic Maps. The framework is described in the context of film post-production.
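Topic Maps organise knowledge as topics linked by typed associations, with occurrences anchoring topics back into the source media. Below is a minimal sketch of that data model as it might apply to annotating film footage; the class names and example entries are hypothetical, not part of the DREAM framework itself.

```python
from dataclasses import dataclass, field

# Hypothetical, minimal rendering of the Topic Map model: topics, typed
# associations, and occurrences that anchor topics to media segments.

@dataclass
class Occurrence:
    media_uri: str        # e.g. a video file
    start_s: float        # segment start, seconds
    end_s: float          # segment end, seconds

@dataclass
class Topic:
    name: str
    occurrences: list = field(default_factory=list)

@dataclass
class Association:
    kind: str             # association type, e.g. "appears-at"
    members: tuple        # the associated topics

# Entries an automatic labelling engine might emit for a film rush:
actor = Topic("Actor A", [Occurrence("rushes/take_042.mov", 12.0, 18.5)])
location = Topic("Warehouse set", [Occurrence("rushes/take_042.mov", 0.0, 45.0)])
link = Association("appears-at", (actor, location))

# Retrieval by semantic content rather than raw keywords:
for topic in (actor, location):
    for occ in topic.occurrences:
        print(f"{topic.name}: {occ.media_uri} [{occ.start_s}-{occ.end_s}s]")
```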
Abstract:
When the orthogonal space-time block code (STBC), or Alamouti code, is applied to a multiple-input multiple-output (MIMO) communications system, optimum reception can be achieved by simple signal decoupling at the receiver. The performance, however, deteriorates significantly in the presence of co-channel interference (CCI) from other users. In this paper, this CCI problem is overcome by applying independent component analysis (ICA), a blind source separation algorithm. The approach rests on the fact that, if the transmitted data from every transmit antenna are mutually independent, they can be effectively separated at the receiver by blind source separation, and the CCI is thereby suppressed. Although not required by the ICA algorithm itself, a small number of training symbols are necessary to eliminate the phase and order ambiguities at the ICA outputs, leading to a semi-blind approach. Numerical simulations verify the proposed ICA approach in a multiuser MIMO system.
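The semi-blind idea can be illustrated compactly: separate mutually independent transmit streams blindly, then use a few known training symbols to resolve ICA's inherent order and sign ambiguities. The sketch below uses real-valued BPSK over a real mixing channel for simplicity (sklearn's FastICA handles real data); it does not reproduce the paper's complex STBC/MIMO setting.

```python
import numpy as np
from sklearn.decomposition import FastICA

rng = np.random.default_rng(1)
n_sym, n_train = 1000, 8

s = rng.choice([-1.0, 1.0], size=(2, n_sym))     # two independent BPSK streams
H = rng.normal(size=(2, 2))                      # unknown flat channel (mixing matrix)
x = H @ s + 0.05 * rng.normal(size=(2, n_sym))   # received mixtures plus noise

# Blind separation: ICA recovers the streams up to order and sign.
y = FastICA(n_components=2, random_state=0).fit_transform(x.T).T

# Resolve permutation and sign using the first n_train known training symbols.
recovered = np.empty_like(y)
for k in range(2):
    corr = [np.corrcoef(y[j, :n_train], s[k, :n_train])[0, 1] for j in range(2)]
    j = int(np.argmax(np.abs(corr)))
    recovered[k] = np.sign(corr[j]) * y[j]

ser = np.mean(np.sign(recovered[:, n_train:]) != s[:, n_train:])
print(f"symbol error rate after separation: {ser:.3f}")
```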
Abstract:
The ability of four operational weather forecast models [ECMWF, Action de Recherche Petite Echelle Grande Echelle model (ARPEGE), Regional Atmospheric Climate Model (RACMO), and Met Office] to generate a cloud at the right location and time (the cloud frequency of occurrence) is assessed in the present paper using a two-year time series of observations collected by profiling ground-based active remote sensors (cloud radar and lidar) located at three different sites in western Europe (Cabauw, Netherlands; Chilbolton, United Kingdom; and Palaiseau, France). Particular attention is given to potential biases that may arise from instrumentation differences (especially sensitivity) from one site to another and from intermittent sampling. In a second step, the statistical properties of the cloud variables involved in the most advanced cloud schemes of numerical weather forecast models (ice water content and cloud fraction) are characterized and compared with their counterparts in the models. The two years of observations are first considered as a whole in order to evaluate the accuracy of the statistical representation of the cloud variables in each model. It is shown that all models tend to produce too many high-level clouds, with too-high cloud fraction and ice water content. Midlevel and low-level cloud occurrence is also generally overestimated, with too-low cloud fraction but a correct ice water content. The dataset is then divided into seasons to evaluate the potential of the models to generate different cloud situations in response to different large-scale forcings. Strong variations in cloud occurrence are found in the observations from one season to the same season the following year, as well as in the seasonal cycle. Overall, the model biases observed using the whole dataset are still found at the seasonal scale, but the models generally manage to reproduce the observed seasonal variations in cloud occurrence well. The models do not, however, generate the same cloud fraction distributions, and these distributions do not agree with the observations. Another general conclusion is that the use of continuous ground-based radar and lidar observations is a powerful tool for evaluating model cloud schemes and for a responsive assessment of the benefit achieved by changing or tuning a model cloud scheme.
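The core diagnostic in this kind of evaluation, cloud frequency of occurrence, reduces to counting how often a cloud mask is set at each height in observations and in model output sampled on a common grid. A minimal sketch with synthetic fields follows; the data, grid, and detection threshold are placeholders, not the study's retrievals.

```python
import numpy as np

# Hedged sketch: cloud frequency of occurrence per height level from
# time-height cloud-fraction fields. Synthetic data stands in for the
# radar/lidar retrievals and the model output.
rng = np.random.default_rng(2)
n_time, n_height = 24 * 365, 40                 # hourly profiles for a year, 40 levels

obs_cf = rng.random((n_time, n_height)) ** 3    # placeholder observed cloud fraction
mod_cf = rng.random((n_time, n_height)) ** 2.5  # placeholder model cloud fraction

THRESHOLD = 0.05                                # cloud "present" if fraction exceeds this

obs_occurrence = (obs_cf > THRESHOLD).mean(axis=0)   # frequency-of-occurrence profile
mod_occurrence = (mod_cf > THRESHOLD).mean(axis=0)

bias = mod_occurrence - obs_occurrence
lvl = int(np.argmax(np.abs(bias)))
print(f"largest occurrence bias at level {lvl}: {bias[lvl]:+.2f}")
```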