Abstract:
We survey the literature on spatial bio-economic and land-use modelling and assess its thematic development. Unobserved site-specific heterogeneity is a feature of almost all the surveyed works, and it appears to have stimulated significant methodological innovation. In an attempt to improve how the prototype model incorporates heterogeneity, we consider modelling alternatives and extensions. We discuss some solutions and conjecture others.
Abstract:
This paper presents a parallel Two-Pass Hexagonal (TPA) algorithm for motion estimation, which combines the Linear Hashtable Motion Estimation Algorithm (LHMEA) with Hexagonal Search (HEXBS). In the TPA, Motion Vectors (MVs) generated by the first-pass LHMEA are used as predictors for the second-pass HEXBS motion estimation, which then searches only a small number of Macroblocks (MBs). We introduce hashtables into video processing and provide a complete parallel implementation. We propose and evaluate parallel implementations of the LHMEA stage of the TPA on clusters of workstations for real-time video compression, and discuss how parallel video coding on load-balanced multiprocessor systems can help, especially with motion estimation. The effect of load balancing on performance is also discussed. The performance of the algorithm is evaluated on standard video sequences and the results are compared with current algorithms.
Abstract:
This paper presents an improved parallel Two-Pass Hexagonal (TPA) algorithm for motion estimation, which combines the Linear Hashtable Motion Estimation Algorithm (LHMEA) with Hexagonal Search (HEXBS). Motion Vectors (MVs) generated by the first-pass LHMEA are used as predictors for the second-pass HEXBS motion estimation, which then searches only a small number of Macroblocks (MBs). We introduce hashtables into video processing and provide a complete parallel implementation. The hashtable structure of the LHMEA is improved compared with the original TPA and LHMEA. We propose and evaluate parallel implementations of the LHMEA stage of the TPA on clusters of workstations for real-time video compression; the implementation contains both spatial and temporal approaches. The performance of the algorithm is evaluated on standard video sequences and the results are compared with current algorithms.
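Purely as an illustration of the two-pass idea described in these two abstracts (coarse predictors obtained from a hashtable pass, then refined by a localized hexagonal search), a minimal Python sketch follows. The block size, the toy hash key, and the SAD cost function are assumptions of this sketch, not details taken from the papers.

```python
import numpy as np

BLOCK = 16  # assumed macroblock size

def sad(cur, ref, bx, by, dx, dy):
    """Sum of absolute differences between the current macroblock at (bx, by)
    and the reference macroblock displaced by (dx, dy)."""
    h, w = ref.shape
    x, y = bx + dx, by + dy
    if x < 0 or y < 0 or x + BLOCK > w or y + BLOCK > h:
        return np.inf
    c = cur[by:by + BLOCK, bx:bx + BLOCK].astype(int)
    r = ref[y:y + BLOCK, x:x + BLOCK].astype(int)
    return np.abs(c - r).sum()

def first_pass_predictor(table, cur, bx, by):
    """Stand-in for the LHMEA pass: hash a coarse block feature and look up a
    previously stored motion vector to use as a predictor."""
    key = int(cur[by:by + BLOCK, bx:bx + BLOCK].mean()) // 4  # toy hash key
    return table.get(key, (0, 0))

# Large-hexagon and small refinement patterns used by a HEXBS-style search.
LARGE_HEX = [(0, 0), (2, 0), (-2, 0), (1, 2), (-1, 2), (1, -2), (-1, -2)]
SMALL = [(0, 0), (1, 0), (-1, 0), (0, 1), (0, -1)]

def second_pass_hexbs(cur, ref, bx, by, pred):
    """Refine the first-pass predictor: move the large hexagon until its centre
    is the best point, then perform a small final refinement."""
    mv = pred
    while True:
        best = min(LARGE_HEX, key=lambda d: sad(cur, ref, bx, by, mv[0] + d[0], mv[1] + d[1]))
        if best == (0, 0):
            break
        mv = (mv[0] + best[0], mv[1] + best[1])
    best = min(SMALL, key=lambda d: sad(cur, ref, bx, by, mv[0] + d[0], mv[1] + d[1]))
    return (mv[0] + best[0], mv[1] + best[1])
```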
Abstract:
Eddy-covariance measurements of carbon dioxide fluxes were taken semi-continuously between October 2006 and May 2008 at 190 m height in central London (UK) to quantify emissions and study their controls. Inner London, with a population of 8.2 million (~5000 inhabitants per km²), is heavily built up, with 8% vegetation cover within the central boroughs. CO2 emissions were found to be controlled mainly by fossil fuel combustion (e.g. traffic, commercial and domestic heating). The measurement period allowed investigation of both diurnal patterns and seasonal trends. Diurnal averages of CO2 fluxes were highly correlated with traffic; however, it was changes in heating-related natural gas consumption and, to a lesser extent, photosynthetic activity that controlled the seasonal variability. Despite measurements being taken at ca. 22 times the mean building height, coupling with street level was adequate, especially during daytime. Night-time saw a higher occurrence of stable or neutral stratification, especially in autumn and winter, which resulted in data loss in post-processing. No significant difference was found between the annual estimate of net CO2 exchange for the expected measurement footprint and the values derived from the National Atmospheric Emissions Inventory (NAEI), with daytime fluxes differing by only 3%. This agreement with the NAEI data also supported the use of the simple flux footprint model applied to the London site, and suggests that individual roughness elements did not significantly affect the measurements, owing to the large ratio of measurement height to mean building height.
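As background to the measurement technique described above, the eddy-covariance flux is the covariance of the fluctuations of vertical wind speed and CO2 density within each averaging period. A minimal sketch follows; the variable names and the simple block-averaging choice are assumptions of this illustration, not details of the study's processing chain.

```python
import numpy as np

def eddy_covariance_flux(w, c, period_len):
    """CO2 flux per averaging period as the covariance of vertical wind speed
    w (m s-1) and CO2 density c (e.g. mg m-3), using simple block averaging."""
    w = np.asarray(w, dtype=float)
    c = np.asarray(c, dtype=float)
    n = (len(w) // period_len) * period_len          # drop the incomplete last block
    w = w[:n].reshape(-1, period_len)
    c = c[:n].reshape(-1, period_len)
    w_prime = w - w.mean(axis=1, keepdims=True)       # fluctuations about the period mean
    c_prime = c - c.mean(axis=1, keepdims=True)
    return (w_prime * c_prime).mean(axis=1)           # one flux value per period

# e.g. 20 Hz data averaged over 30-minute periods: period_len = 20 * 60 * 30
```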
Abstract:
We have performed atomistic molecular dynamics simulations of an anionic sodium dodecyl sulfate (SDS) micelle and a nonionic poly(ethylene oxide) (PEO) polymer in aqueous solution. The micelle consisted of 60 surfactant molecules, and the polymer chain lengths varied from 20 to 40 monomers. The force field parameters for PEO were adjusted by using 1,2-dimethoxyethane (DME) as a model compound and matching its hydration enthalpy and conformational behavior to experiment. Excellent agreement with previous experimental and simulation work was obtained through these modifications. The simulated scaling behavior of the PEO radius of gyration was also in close agreement with experimental results. The SDS-PEO simulations show that the polymer resides on the micelle surface at the hydrocarbon-water interface, leading to a selective reduction in the hydrophobic contribution to the solvent-accessible surface area of the micelle. The association is driven mainly by hydrophobic interactions between the polymer and the surfactant tails, while the interaction between the polymer and the sulfate headgroups on the micelle surface is weak. The 40-monomer chain is mostly wrapped around the micelle, and nearly 90% of its monomers are adsorbed at low PEO concentration. Simulations were also performed with multiple 20-monomer chains, and gradual addition of polymer indicates that about 120 monomers are required to saturate the micelle surface. The stoichiometry of the resulting complex is in close agreement with experimental results, and the commonly accepted "beaded necklace" structure of the SDS-PEO complex is recovered by our simulations.
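For readers unfamiliar with the quantity, the radius-of-gyration scaling mentioned above can be computed from simulation trajectories roughly as sketched below; the function names and the log-log fit are illustrative assumptions, not the authors' analysis code.

```python
import numpy as np

def radius_of_gyration(coords, masses):
    """Mass-weighted radius of gyration of a single polymer conformation
    (coords: N x 3 array, masses: length-N array)."""
    coords = np.asarray(coords, dtype=float)
    masses = np.asarray(masses, dtype=float)
    com = np.average(coords, axis=0, weights=masses)
    sq_dist = ((coords - com) ** 2).sum(axis=1)
    return np.sqrt(np.average(sq_dist, weights=masses))

def scaling_exponent(chain_lengths, mean_rg):
    """Fit Rg ~ N**nu on a log-log scale and return the exponent nu."""
    nu, _ = np.polyfit(np.log(chain_lengths), np.log(mean_rg), 1)
    return nu
```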
Abstract:
The effect of terrestrial television multipath signals on the intermediate frequency (IF) vestigial sideband filter and the video detector is discussed. A new detector is proposed which, by processing the detected phase-quadrature information, derives the correct phase for synchronous detection in the presence of multipath. This minimizes dispersion and produces a detected video signal in which any ghosts add linearly.
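One way such a quadrature-based correction could be sketched is shown below: the carrier phase error introduced by multipath is estimated from the in-phase and quadrature components and removed before taking the video (in-phase) signal. The smoothing window and the assumption that the phase error varies slowly are assumptions of this sketch, not the paper's detector design.

```python
import numpy as np

def phase_corrected_sync_detection(i_sig, q_sig, win=255):
    """Estimate the slowly varying carrier phase error from the in-phase and
    quadrature components, derotate the complex baseband signal by that
    estimate, and return the in-phase (video) component."""
    z = np.asarray(i_sig, dtype=float) + 1j * np.asarray(q_sig, dtype=float)
    inst_phase = np.unwrap(np.angle(z))
    kernel = np.ones(win) / win
    carrier_phase = np.convolve(inst_phase, kernel, mode="same")  # smoothed phase estimate
    return np.real(z * np.exp(-1j * carrier_phase))
```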
Abstract:
Models of normal word production are well specified about the effects of the frequency of linguistic stimuli on lexical access, but are less clear regarding the same effects on later stages of word production, particularly word articulation. In aphasia, this lack of specificity about downstream frequency effects is even more noticeable because there is a relatively limited amount of data on the time course of frequency effects for this population. This study begins to fill this gap by comparing the effects of variation in word frequency (lexical, whole word) and bigram frequency (sub-lexical, within word) on word production abilities in ten normal speakers and eight individuals with mild-to-moderate aphasia. In an immediate repetition paradigm, participants repeated single monosyllabic words in which word frequency (high or low) was crossed with bigram frequency (high or low). Indices for mapping the time course of these effects included reaction time (RT) for linguistic processing and motor preparation, and word duration (WD) for speech motor performance (word articulation time). The results indicated that individuals with aphasia had significantly longer RT and WD than normal speakers. RT showed a significant main effect only for word frequency (i.e., high-frequency words had shorter RT). WD showed significant main effects of word and bigram frequency; however, contrary to our expectations, high-frequency items had longer WD. Further investigation of WD revealed that, independent of the influence of word and bigram frequency, vowel type (tense or lax) had the expected effect on WD. Moreover, individuals with aphasia differed from control speakers in their ability to implement tense vowel duration, even though they could produce an appropriate distinction between tense and lax vowels. The results highlight the importance of using temporal measures to identify subtle deficits in linguistic and speech motor processing in aphasia, the crucial role of the phonetic characteristics of the stimulus set in studying speech production, and the need for language production models to account more explicitly for word articulation.
Abstract:
This study investigates the production and on-line processing of English tense morphemes by sequential bilingual (L2) Turkish-speaking children with more than three years of exposure to English. Thirty-nine 6- to 9-year-old L2 children and 28 typically developing age-matched monolingual (L1) children were administered the production components for third person –s and past tense of the Test for Early Grammatical Impairment (Rice & Wexler, 1996) and participated in an on-line word-monitoring task involving grammatical and ungrammatical sentences with the presence or omission of tense (third person –s, past tense –ed) and non-tense (progressive –ing, possessive ‘s) morphemes. The L2 children’s performance on the on-line task was compared with that of children with Specific Language Impairment (SLI) in Montgomery & Leonard (1998, 2006) to ascertain similarities and differences between the two populations. Results showed that the L2 children were sensitive to the ungrammaticality induced by the omission of tense morphemes, despite variable production. This reinforces the claim that underlying syntactic representations are intact in child L2 acquisition despite non-target-like production (Haznedar & Schwartz, 1997).
Abstract:
The Chartered Institution of Building Services Engineers (CIBSE) produced a technical memorandum (TM36) presenting research on future climate impacts on building energy use and thermal comfort. One climate projection for each of four CO2 emissions scenarios was used in TM36, providing a deterministic outlook. As part of the UK Climate Impacts Programme (UKCIP), probabilistic climate projections are being studied in relation to building energy simulation techniques. Including uncertainty in climate projections is considered an important advance in climate impacts modelling and is included in the latest UKCIP data (UKCP09). Incorporating the stochastic nature of these new climate projections in building energy modelling requires a significant increase in data handling and careful statistical interpretation of the results to provide meaningful conclusions. This paper compares the results of building energy simulations using deterministic and probabilistic climate data, based on two case-study buildings: (i) a mixed-mode office building with exposed thermal mass and (ii) a mechanically ventilated, lightweight office building. Building (i) represents an energy-efficient design that provides passive and active measures to maintain thermal comfort. Building (ii) relies entirely on mechanical means for heating and cooling, and its lightweight construction raises concern over increased cooling loads in a warmer climate. Devising an effective probabilistic approach highlighted greater uncertainty in predicted building performance, depending on the type of building modelled and the performance factors under consideration. The results indicate that the range of calculated quantities depends not only on the building type but also, strongly, on the performance parameters of interest. Uncertainty is likely to be particularly marked for thermal comfort in naturally ventilated buildings.
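In practice, the difference between the deterministic and probabilistic approaches amounts to a Monte Carlo loop over climate realizations, summarizing a distribution of the performance metric rather than a single value. A minimal sketch under that reading follows; run_building_model is a hypothetical placeholder for whatever building simulation engine is used.

```python
import numpy as np

def probabilistic_performance(climate_realizations, run_building_model,
                              percentiles=(10, 50, 90)):
    """Run the building model once per climate realization (e.g. UKCP09 samples)
    and summarize the chosen performance metric as a distribution.

    run_building_model is a hypothetical callable: weather series -> metric
    (e.g. annual cooling demand or overheating hours)."""
    results = np.array([run_building_model(weather) for weather in climate_realizations])
    return {f"p{p}": float(np.percentile(results, p)) for p in percentiles}
```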
Comparing the thermal performance of horizontal slinky-loop and vertical slinky-loop heat exchangers
Abstract:
The heat pump market in the UK has grown rapidly over the last few years. Performance analyses of vertical ground-loop heat exchanger configurations have been widely carried out using both numerical modelling and experiments. However, research findings and design recommendations for horizontal slinky-loop and vertical slinky-loop heat exchangers are far fewer than those for vertical ground-loop configurations, especially where the long-term operation of the systems is concerned. This paper presents the results of a numerical simulation of the horizontal slinky-loop and vertical slinky-loop heat exchangers of a ground-source heat pump system. A three-dimensional numerical heat transfer model was developed to study the thermal performance of various heat exchanger configurations. The influence of the loop pitch (loop spacing) and of the installation depth of vertical slinky-loop configurations was investigated, and the thermal performance and excavation work required for the horizontal and vertical slinky-loop heat exchangers were compared. The results show that the influence of the installation depth of the vertical slinky-loop heat exchanger on the thermal performance of the system is small, and that the maximum difference in thermal performance between vertical and horizontal slinky-loop heat exchangers with the same loop diameter and loop pitch is less than 5%.
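To make the excavation comparison concrete, a rough geometric sketch is given below; the trench dimensions, burial depths and loop diameter are illustrative assumptions, not values taken from the study.

```python
import math

def slinky_pipe_length(trench_len, loop_diam, loop_pitch):
    """Approximate pipe length of a slinky coil: one loop of circumference
    pi * D per pitch advance, plus the straight run along the trench."""
    n_loops = trench_len / loop_pitch
    return n_loops * math.pi * loop_diam + trench_len

def excavation_volume(trench_len, trench_width, trench_depth):
    """Rectangular-trench approximation of the excavation work."""
    return trench_len * trench_width * trench_depth

# Illustrative (assumed) dimensions for a 40 m trench and a 1 m loop diameter:
horizontal = excavation_volume(40.0, trench_width=1.0, trench_depth=1.2)  # wide, shallow
vertical = excavation_volume(40.0, trench_width=0.3, trench_depth=1.7)    # narrow, deeper
print(f"excavation: horizontal {horizontal:.0f} m3 vs vertical {vertical:.0f} m3")
```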
Abstract:
Models for water transfer in the crop-soil system are key components of agro-hydrological models for irrigation, fertilizer and pesticide practices. Many of the hydrological models for water transfer in the crop-soil system are either too approximate, owing to oversimplified algorithms, or employ complex numerical schemes. In this paper we developed a simple and sufficiently accurate algorithm which can be easily adopted in agro-hydrological models for the simulation of water dynamics. We used a dual crop coefficient approach proposed by the FAO for estimating potential evaporation and transpiration, and a dynamic model for calculating the relative root length distribution on a daily basis. Using a small time step of 0.001 d, we implemented separate algorithms for actual evaporation, root water uptake and soil water redistribution by decoupling these processes. The Richards equation describing soil water movement was solved using an integration strategy over the soil layers instead of complex numerical schemes. This drastically simplified the procedure of modelling soil water and led to much shorter computer code. The validity of the proposed model was tested against data from field experiments on two contrasting soils cropped with wheat. Good agreement was achieved between measured and simulated soil water content at various depths collected at intervals during crop growth. This indicates that the model is satisfactory in simulating water transfer in the crop-soil system, and can therefore be reliably adopted in agro-hydrological models. Finally, we demonstrate how the developed model can be used to study the effect of environmental changes, such as a lowered groundwater table caused by the construction of a motorway, on crop transpiration.
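The decoupled (operator-split) update described above can be sketched per time step along the following lines; the layer discretization, units and boundary handling are simplifying assumptions of this sketch rather than the paper's exact scheme.

```python
import numpy as np

def step_soil_water(theta, dt, e_pot, t_pot, root_frac, k_fn, psi_fn, dz):
    """One decoupled (operator-split) update of layered soil water content:
    (1) actual evaporation from the surface layer, (2) root water uptake weighted
    by the relative root length distribution, (3) redistribution between layers
    from Darcy fluxes integrated over the layers.

    theta        volumetric water content per layer (-)
    dt           time step in days (e.g. 0.001 d)
    e_pot, t_pot potential evaporation / transpiration (mm d-1)
    root_frac    relative root length distribution per layer (sums to 1)
    k_fn, psi_fn hydraulic conductivity (mm d-1) / matric potential (mm) of theta
    dz           layer thickness (mm)
    """
    theta = np.array(theta, dtype=float)

    # (1) evaporation, limited by the water stored in the top layer
    theta[0] -= min(e_pot * dt, theta[0] * dz) / dz

    # (2) root water uptake, limited layer by layer
    theta -= np.minimum(t_pot * dt * root_frac, theta * dz) / dz

    # (3) redistribution: downward Darcy flux between adjacent layers
    k, psi = k_fn(theta), psi_fn(theta)
    k_mid = 0.5 * (k[:-1] + k[1:])
    q = k_mid * (1.0 - (psi[1:] - psi[:-1]) / dz)  # gravity drains when psi is uniform
    theta[:-1] -= q * dt / dz
    theta[1:] += q * dt / dz                       # bottom boundary omitted in this sketch
    return theta
```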
The capability-affordance model: a method for analysis and modelling of capabilities and affordances
Abstract:
Existing capability models lack qualitative and quantitative means to compare business capabilities. This paper extends previous work and uses affordance theories to consistently model and analyse capabilities. We use the concept of objective and subjective affordances to model a capability as a tuple of a set of resource affordance system mechanisms and action paths, dependent on one or more critical affordance factors. We identify an affordance chain of subjective affordances by which affordances work together to enable an action, and an affordance path that links action affordances to create a capability system. We define the mechanism and path underlying a capability and show how the affordance modelling notation (AMN) can represent the affordances comprising a capability. We propose a method to compare capabilities quantitatively and qualitatively using efficiency, effectiveness and quality metrics. The method is demonstrated with a medical example comparing the capability of syringe and needleless anaesthetic systems.
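The capability-as-tuple idea and the metric-based comparison could be represented with a small data structure along these lines; the field names, example factors and scores are hypothetical illustrations, not the paper's AMN notation.

```python
from dataclasses import dataclass

@dataclass
class Capability:
    """A capability as a tuple of resource affordance mechanisms and an action
    (affordance) path, qualified by critical affordance factors.
    All field names here are illustrative, not the paper's notation."""
    name: str
    mechanisms: list        # resource affordance system mechanisms
    action_path: list       # ordered affordance path that creates the capability
    critical_factors: dict  # e.g. {"delivery pressure": "adequate"} -- hypothetical

def compare(cap_a, cap_b, metrics):
    """Return, for each metric (efficiency, effectiveness, quality), the name of
    the capability with the higher score. `metrics` maps metric name to
    {capability name: score}; the scores are hypothetical inputs."""
    return {m: max((cap_a, cap_b), key=lambda c: scores[c.name]).name
            for m, scores in metrics.items()}
```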
Abstract:
ERPs were elicited to (1) words, (2) pseudowords derived from these words, and (3) nonwords with no lexical neighbors, in a task involving listening to immediately repeated auditory stimuli. There was a significant early (P200) effect of phonotactic probability in the first auditory presentation, which discriminated words and pseudowords from nonwords, and a significant somewhat later (N400) effect of lexicality, which discriminated words from pseudowords and nonwords. There was no reliable effect of lexicality in the ERPs to the second auditory presentation. We conclude that early sublexical phonological processing differed according to the phonotactic probability of the stimuli, and that lexically based redintegration occurred for words but not for pseudowords or nonwords. Thus, in online word recognition and immediate retrieval, phonological and/or sublexical processing plays a more important role than lexical-level redintegration.