49 results for Logic and Linguistic Simulation
Abstract:
Chongqing is the largest central-government-controlled municipality in China and is now undergoing rapid urbanization. The question remains open: what are the consequences of such rapid urbanization in Chongqing in terms of urban microclimates? An integrated study comprising three different research approaches is adopted in the present paper. By analyzing the observed annual climate data, an average rising trend of 0.10°C/decade was found for the annual mean temperature from 1951 to 2010 in Chongqing, indicating a high degree of urban warming. In addition, two complementary types of field measurements were conducted: fixed weather stations and mobile transverse measurement. Numerical simulations using an in-house-developed program are able to predict the urban air temperature in Chongqing. The urban heat island intensity in Chongqing is stronger in summer than in autumn and winter. The maximum urban heat island intensity occurs at around midnight and can be as high as 2.5°C. In the daytime, an urban cool island exists. Local greenery has a great impact on the local thermal environment. Urban green spaces can reduce urban air temperature and therefore mitigate the urban heat island. The cooling effect of an urban river is limited in Chongqing, as both sides of the river are among the most developed areas, but the relative humidity is much higher near the river than in places far from it.
Abstract:
The research project used to frame discussion in this chapter was a doctoral study of the experiences of English primary school teachers teaching pupils whose home language was not English in their previously monolingual classrooms. They taught in a region in the south of England which experienced a significant rise in the population of non-native English speakers following Eastern European member states’ accession to the EU in 2004 and 2007. The study focussed principally on the teachers’ responses to their newly arrived Polish children because Polish families were arriving in far greater numbers than those from other countries. The research aims focussed on exploring and analysing the pedagogical experiences of teachers managing the acquisition of English language for their Polish children. Critical engagement with their experiences and the ways in which they did or did not adapt their pedagogy for teaching English was channelled through Bourdieuian constructs of linguistic field, capital and habitus. The following sections explore my reasons for adopting Bourdieu’s work as a theoretical lens, the practicalities and challenges of incorporating Bourdieu’s tools for thinking in data analysis, and the subsequent impact on my research activity.
Abstract:
This article introduces a quantitative approach to e-commerce system evaluation based on the theory of process simulation. The general concept of e-commerce system simulation is motivated by several limitations in e-commerce system development, such as the large initial investment of time and money and the long period from business planning through system development, testing, and operation to eventual return; in other words, currently used system analysis and development methods cannot answer investors' key questions, such as how good their e-commerce system could be, how much return on investment they could expect, and which areas of the initial business plan they should improve. In order to examine the value and potential effects of an e-commerce business plan, a quantitative evaluation approach is necessary, and the authors of this article believe that process simulation is an appropriate option. The overall objective of this article is to apply the theory of process simulation to e-commerce system evaluation, which the authors achieve through an experimental study of a business plan for an online construction and demolition waste exchange. The methodologies adopted in this article include literature review, system analysis and development, simulation modelling and analysis, and case study. The results of this article include the concept of e-commerce system simulation, a comprehensive review of simulation methods adopted in e-commerce system evaluation, and a real case study applying simulation to e-commerce system evaluation. Furthermore, the authors hope that the adoption and implementation of the process simulation approach can effectively support business decision-making and improve the efficiency of e-commerce systems.
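The abstract names process simulation as the evaluation technique but gives no model details, so the following is only a generic, minimal discrete-event sketch of the idea; the scenario (orders arriving at an online exchange and queueing for processing) and all parameter names and values are invented for illustration, not taken from the study.

```python
import random

def simulate_exchange(n_orders=1000, mean_interarrival=2.0,
                      service_time=1.5, seed=42):
    """Toy discrete-event process simulation of an online exchange:
    orders arrive at random (exponential interarrival times) and are
    processed one at a time by a single server. Returns the average
    time an order waits before processing begins."""
    random.seed(seed)
    clock = 0.0           # current simulation time
    server_free_at = 0.0  # time at which the server next becomes idle
    total_wait = 0.0
    for _ in range(n_orders):
        clock += random.expovariate(1.0 / mean_interarrival)  # next arrival
        start = max(clock, server_free_at)  # wait if the server is busy
        total_wait += start - clock
        server_free_at = start + service_time
    return total_wait / n_orders

avg_wait = simulate_exchange()
```

Running the model under different hypothetical parameter settings (arrival rates, processing capacity) is what lets such a simulation answer "what if" questions about a business plan before any system is built.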
Abstract:
In a world of almost permanent and rapidly increasing electronic data availability, techniques for filtering, compressing, and interpreting this data to transform it into valuable and easily comprehensible information are of utmost importance. One key topic in this area is the capability to deduce future system behavior from a given data input. This book brings together for the first time the complete theory of data-based neurofuzzy modelling and the linguistic attributes of fuzzy logic in a single cohesive mathematical framework. After introducing the basic theory of data-based modelling, new concepts including extended additive and multiplicative submodels are developed, and their extensions to state estimation and data fusion are derived. All these algorithms are illustrated with benchmark and real-life examples to demonstrate their efficiency. Chris Harris and his group have carried out pioneering work which has tied together the fields of neural networks and linguistic rule-based algorithms. This book is aimed at researchers and scientists in time series modelling, empirical data modelling, knowledge discovery, data mining, and data fusion.
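The "linguistic" aspect of fuzzy logic mentioned in the abstract can be illustrated with a minimal sketch; this is a generic textbook-style example, not the book's own framework, and the membership functions, rules, and numbers are invented for illustration.

```python
def tri(x, a, b, c):
    """Triangular membership function: 0 outside [a, c], peaking at 1 when x == b."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x < b else (c - x) / (c - b)

def fuzzy_fan_speed(temp):
    """Two linguistic rules (hypothetical values):
       IF temp is COOL THEN speed is LOW  (30)
       IF temp is WARM THEN speed is HIGH (80)
    Defuzzified by the weighted average of the rule outputs."""
    cool = tri(temp, 0.0, 15.0, 25.0)   # degree to which temp is "COOL"
    warm = tri(temp, 20.0, 30.0, 45.0)  # degree to which temp is "WARM"
    if cool + warm == 0.0:
        return 0.0  # no rule fires
    return (cool * 30.0 + warm * 80.0) / (cool + warm)
```

The appeal of such rule bases is exactly what the abstract highlights: the rules stay readable as linguistic statements, while the numeric machinery (and, in neurofuzzy systems, the learned membership parameters) remains mathematically precise.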
Abstract:
This article elucidates the Typological Primacy Model (TPM; Rothman, 2010, 2011, 2013) for the initial stages of adult third language (L3) morphosyntactic transfer, addressing questions that stem from the model and its application. The TPM maintains that structural proximity between the L3 and the L1 and/or the L2 determines L3 transfer. In addition to demonstrating empirical support for the TPM, this article articulates a proposal for how the mind unconsciously determines typological (structural) proximity based on linguistic cues from the L3 input stream used by the parser early on to determine holistic transfer of one previous (the L1 or the L2) system. This articulated version of the TPM is motivated by argumentation appealing to cognitive and linguistic factors. Finally, in line with the general tenets of the TPM, I ponder if and why L3 transfer might obtain differently depending on the type of bilingual (e.g. early vs. late) and proficiency level of bilingualism involved in the L3 process.
Abstract:
The performance of the atmospheric component of the new Hadley Centre Global Environmental Model (HadGEM1) is assessed in terms of its ability to represent a selection of key aspects of variability in the Tropics and extratropics. These include midlatitude storm tracks and blocking activity, synoptic variability over Europe, and the North Atlantic Oscillation together with tropical convection, the Madden-Julian oscillation, and the Asian summer monsoon. Comparisons with the previous model, the Third Hadley Centre Coupled Ocean-Atmosphere GCM (HadCM3), demonstrate that there has been a considerable increase in the transient eddy kinetic energy (EKE), bringing HadGEM1 into closer agreement with current reanalyses. This increase in EKE results from the increased horizontal resolution and, in combination with the improved physical parameterizations, leads to improvements in the representation of Northern Hemisphere storm tracks and blocking. The simulation of synoptic weather regimes over Europe is also greatly improved compared to HadCM3, again due to both increased resolution and other model developments. The variability of convection in the equatorial region is generally stronger and closer to observations than in HadCM3. There is, however, still limited convective variance coincident with several of the observed equatorial wave modes. Simulation of the Madden-Julian oscillation is improved in HadGEM1: both the activity and interannual variability are increased and the eastward propagation, although slower than observed, is much better simulated. While some aspects of the climatology of the Asian summer monsoon are improved in HadGEM1, the upper-level winds are too weak and the simulation of precipitation deteriorates. The dominant modes of monsoon interannual variability are similar in the two models, although in HadCM3 this is linked to SST forcing, while in HadGEM1 internal variability dominates. 
Overall, analysis of the phenomena considered here indicates that HadGEM1 performs well and, in many important respects, improves upon HadCM3. Together with the improved representation of the mean climate, this improvement in the simulation of atmospheric variability suggests that HadGEM1 provides a sound basis for future studies of climate and climate change.
Abstract:
Estimates of the response of crops to climate change rarely quantify the uncertainty inherent in the simulation of both climate and crops. We present a crop simulation ensemble for a location in India, perturbing the response of both crop and climate under both baseline (12 720 simulations) and doubled-CO2 (171 720 simulations) climates. Some simulations used parameter values representing genotypic adaptation to mean temperature change. Firstly, observed and simulated yields in the baseline climate were compared. Secondly, the response of yield to changes in mean temperature was examined and compared to that found in the literature. No consistent response to temperature change was found across studies. Thirdly, the relative contribution of uncertainty in crop and climate simulation to the total uncertainty in projected yield changes was examined. In simulations without genotypic adaptation, most of the uncertainty came from the climate model parameters. Comparison with the simulations with genotypic adaptation and with a previous study suggested that the relatively low crop parameter uncertainty derives from the observational constraints on the crop parameters used in this study. Fourthly, the simulations were used, together with an observed dataset and a simple analysis of crop cardinal temperatures and thermal time, to estimate the potential for adaptation using existing cultivars. The results suggest that the germplasm for complete adaptation of groundnut cultivation in western India to a doubled-CO2 environment may not exist. In conjunction with analyses of germplasm and local management
Abstract:
The new dioxatetraazamacrocycle (L-1) was synthesized by a 2 + 2 condensation and characterized. Stability constants of its copper(II) complexes were determined by spectrophotometry in DMSO at 298.2 K in 0.10 mol dm(-3) KClO4. Mainly dinuclear complexes are formed, and the presence of mononuclear species is dependent on the counterion (Cl- or ClO4-). The association constants of the dinuclear copper(II) complexes with dicarboxylate anions [oxalate (oxa(2-)), malonate (mal(2-)), succinate (suc(2-)), and glutarate (glu(2-))] were also determined by spectrophotometry at 298.2 K in DMSO, and it was found that the values decrease with an increase of the alkyl chain between the carboxylate groups. X-band EPR spectra of the dicopper(II) complexes and of their cascade species in frozen DMSO exhibit dipole-dipole coupling, and their simulation, together with their UV-vis spectra, showed that the copper centres of the complexes in solution had square pyramidal geometries, though with different distortions. From the experimental data, it was also possible to predict the Cu...Cu distances, the minimum, 6.4 angstrom, being found for the (Cu2LCl4)-Cl-1 complex; this distance then increases slightly as the chloride anions are replaced by dicarboxylate anions, from 6.6 angstrom for oxa(2-) to 7.8 angstrom for glu(2-). The crystal structures of the dinuclear copper cascade species with oxa(2-) and suc(2-) were determined and showed one anion bridging both copper centres and Cu...Cu distances of 5.485(7) angstrom and 6.442(8) angstrom, respectively.
Abstract:
Analysis of X-ray powder data for the melt-crystallisable aromatic poly(thioether thioether ketone) [-S-Ar-S-Ar-CO-Ar](n) ('PTTK', Ar = 1,4-phenylene) reveals that it adopts a crystal structure very different from that established for its ether analogue PEEK. Molecular modelling and diffraction-simulation studies of PTTK show that the structure of this polymer is analogous to that of melt-crystallised poly(thioetherketone) [-S-Ar-CO-Ar](n), in which the carbonyl linkages in symmetry-related chains are aligned anti-parallel to one another, and that these bridging units are crystallographically interchangeable. The final model for the crystal structure of PTTK is thus disordered, in the monoclinic space group I2/a (two chains per unit cell), with cell dimensions a = 7.83, b = 6.06, c = 10.35 angstrom, beta = 93.47 degrees. (c) 2005 Elsevier Ltd. All rights reserved.
Abstract:
This is the first of two articles presenting a detailed review of the historical evolution of mathematical models applied in the development of building technology, covering both conventional buildings and intelligent buildings. After presenting the technical differences between conventional and intelligent buildings, this article reviews the existing mathematical models, the levels of abstraction of these models, and their links to the literature on intelligent buildings. The advantages and limitations of the applied mathematical models are identified, and the models are classified in terms of their application range and goal. We then describe how the early mathematical models, mainly physical models applied to conventional buildings, have faced new challenges in the design and management of intelligent buildings, leading to the use of models which offer more flexibility to better cope with various uncertainties. In contrast with the early modelling techniques, modelling approaches based on neural networks, expert systems, fuzzy logic and genetic models provide a promising means to accommodate these complications, as intelligent buildings now need integrated technologies which involve solving complex, multi-objective and integrated decision problems.
Abstract:
This article is the second part of a review of the historical evolution of mathematical models applied in the development of building technology. The first part described the current state of the art and contrasted various models with regard to the applications to conventional buildings and intelligent buildings. It concluded that mathematical techniques adopted in neural networks, expert systems, fuzzy logic and genetic models, that can be used to address model uncertainty, are well suited for modelling intelligent buildings. Despite the progress, the possible future development of intelligent buildings based on the current trends implies some potential limitations of these models. This paper attempts to uncover the fundamental limitations inherent in these models and provides some insights into future modelling directions, with special focus on the techniques of semiotics and chaos. Finally, by demonstrating an example of an intelligent building system with the mathematical models that have been developed for such a system, this review addresses the influences of mathematical models as a potential aid in developing intelligent buildings and perhaps even more advanced buildings for the future.
Abstract:
This paper formally derives a new path-based neural branch prediction algorithm (FPP) into blocks of size two, yielding a lower-cost hardware solution while maintaining input-output characteristics similar to those of the original algorithm. The blocked solution, here referred to as the B2P algorithm, is obtained using graph theory and retiming methods. Verification approaches were exercised to show that the prediction performances obtained from the FPP and B2P algorithms differ by within one misprediction per thousand instructions, using a known framework for branch prediction evaluation. For a chosen FPGA device, circuits generated from the B2P algorithm showed average area savings of over 25% against circuits for the FPP algorithm, with similar timing performance, thus making the proposed blocked predictor superior from a practical viewpoint.
Abstract:
We describe a compositional framework, together with its supporting toolset, for hardware/software co-design. Our framework is an integration of a formal approach within a traditional design flow. The formal approach is based on Interval Temporal Logic and its executable subset, Tempura. Refinement is the key element in our framework because it derives, from a single formal specification of the system, the software and hardware parts of the implementation, while preserving all properties of the system specification. During refinement, simulation is used to choose the appropriate refinement rules, which are applied automatically in the HOL system. The framework is illustrated with two case studies. The work presented is part of a UK collaborative research project between the Software Technology Research Laboratory at De Montfort University and the Oxford University Computing Laboratory.