998 results for Extensive reading
Abstract:
A large number of urban surface energy balance models now exist with different assumptions about the important features of the surface and exchange processes that need to be incorporated. To date, no comparison of these models has been conducted; in contrast, models for natural surfaces have been compared extensively as part of the Project for Intercomparison of Land-surface Parameterization Schemes. Here, the methods and first results from an extensive international comparison of 33 models are presented. The aim of the comparison overall is to understand the complexity required to model energy and water exchanges in urban areas. The degree of complexity included in the models is outlined and impacts on model performance are discussed. During the comparison there have been significant developments in the models, with resulting improvements in performance (root-mean-square error falling by up to two-thirds). Evaluation is based on a dataset containing net all-wave radiation, sensible heat, and latent heat flux observations for an industrial area in Vancouver, British Columbia, Canada. The aim of the comparison is twofold: to identify those modeling approaches that minimize the errors in the simulated fluxes of the urban energy balance and to determine the degree of model complexity required for accurate simulations. There is evidence that some classes of models perform better for individual fluxes, but no model performs best or worst for all fluxes. In general, the simpler models perform as well as the more complex models based on all statistical measures. Generally the schemes have the best overall capability to model net all-wave radiation and the least capability to model latent heat flux.
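For readers unfamiliar with the evaluation statistic cited above, the short Python sketch below shows how a root-mean-square error between simulated and observed fluxes is computed. It is not taken from the paper; the flux values and variable names are purely illustrative.

```python
import numpy as np

def rmse(simulated, observed):
    """Root-mean-square error between simulated and observed flux series."""
    simulated = np.asarray(simulated, dtype=float)
    observed = np.asarray(observed, dtype=float)
    return np.sqrt(np.mean((simulated - observed) ** 2))

# Illustrative use: compare modelled and observed sensible heat flux (W m^-2).
q_h_model = [120.0, 180.5, 210.0, 95.0]
q_h_obs = [110.0, 200.0, 190.0, 100.0]
print(f"RMSE = {rmse(q_h_model, q_h_obs):.1f} W m^-2")
```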
Abstract:
K-Means is a popular clustering algorithm which adopts an iterative refinement procedure to determine data partitions and to compute their associated centres of mass, called centroids. The straightforward implementation of the algorithm is often referred to as 'brute force' since it computes a proximity measure from each data point to each centroid at every iteration of the K-Means process. Efficient implementations of the K-Means algorithm have been predominantly based on multi-dimensional binary search trees (KD-Trees). A combination of an efficient data structure and geometrical constraints allows the number of distance computations required at each iteration to be reduced. In this work we present a general space partitioning approach for improving the efficiency and the scalability of the K-Means algorithm. We propose to adopt approximate hierarchical clustering methods to generate binary space partitioning trees, in contrast to KD-Trees. In the experimental analysis, we have tested the performance of the proposed Binary Space Partitioning K-Means (BSP-KM) when a divisive clustering algorithm is used. We have carried out extensive experimental tests to compare the proposed approach with the one based on KD-Trees (KD-KM) over a wide range of the parameter space. BSP-KM is more scalable than KD-KM, while keeping the deterministic nature of the 'brute force' algorithm. In particular, the proposed space partitioning approach has been shown to overcome the well-known limitation of KD-Trees in high-dimensional spaces and can also be adopted to improve the efficiency of other algorithms in which KD-Trees have been used.
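As a point of reference for the 'brute force' baseline that both KD-KM and BSP-KM aim to accelerate, here is a minimal Lloyd-style K-Means sketch in Python. It is illustrative only: the tree-based pruning of distance computations described in the abstract is not reproduced.

```python
import numpy as np

def kmeans_brute_force(points, k, n_iter=50, seed=0):
    """Plain Lloyd's K-Means: every iteration computes the distance from
    every point to every centroid (the 'brute force' baseline that
    tree-based variants such as KD-KM or BSP-KM try to accelerate)."""
    rng = np.random.default_rng(seed)
    centroids = points[rng.choice(len(points), size=k, replace=False)]
    for _ in range(n_iter):
        # Assignment step: full point-to-centroid distance matrix.
        dists = np.linalg.norm(points[:, None, :] - centroids[None, :, :], axis=2)
        labels = dists.argmin(axis=1)
        # Update step: recompute each centre of mass.
        new_centroids = np.array([
            points[labels == j].mean(axis=0) if np.any(labels == j) else centroids[j]
            for j in range(k)
        ])
        if np.allclose(new_centroids, centroids):
            break
        centroids = new_centroids
    return centroids, labels

# Illustrative run on random 2-D data.
data = np.random.default_rng(1).normal(size=(200, 2))
centres, assignment = kmeans_brute_force(data, k=3)
```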
Abstract:
Dense deployments of wireless local area networks (WLANs) are becoming the norm in many cities around the world. However, increased interference and traffic demands can severely limit the aggregate throughput achievable unless an effective channel assignment scheme is used. In this work, a simple and effective distributed channel assignment (DCA) scheme is proposed. It is shown that, in order to maximise throughput, each access point (AP) should simply choose the channel with the minimum number of active neighbour nodes (i.e. nodes associated with neighbouring APs that have packets to send). However, the practical application of such a scheme depends critically on the ability to estimate the number of neighbour nodes in each channel, for which no practical estimator has been proposed before. In view of this, an extended Kalman filter (EKF) estimator and an estimate of the number of nodes by each AP are proposed. These not only provide fast and accurate estimates but can also exploit channel switching information of neighbouring APs. Extensive packet-level simulation results show that the proposed minimum neighbour and EKF estimator (MINEK) scheme is highly scalable and can provide significant throughput improvement over other channel assignment schemes.
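A minimal sketch of the channel selection rule described above, assuming the per-channel estimates of active neighbour nodes are already available (for example, from the EKF estimator). The channel numbers and estimates below are invented for illustration.

```python
def choose_channel(active_neighbours_per_channel):
    """Pick the channel with the fewest estimated active neighbour nodes.
    `active_neighbours_per_channel` maps channel id -> estimated count."""
    return min(active_neighbours_per_channel, key=active_neighbours_per_channel.get)

# Illustrative estimates for three non-overlapping channels.
estimates = {1: 7.2, 6: 3.1, 11: 5.4}
print(choose_channel(estimates))  # -> 6
```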
Abstract:
We present extensive molecular dynamics simulations of the dynamics of diluted long probe chains entangled with a matrix of shorter chains. The chain lengths of both components are above the entanglement strand length, and the ratio of their lengths is varied over a wide range to cover the crossover from the chain reptation regime to the tube Rouse motion regime of the long probe chains. Reducing the matrix chain length results in a faster decay of the dynamic structure factor of the probe chains, in good agreement with recent neutron spin echo experiments. The diffusion of the long chains, measured by the mean square displacements of the monomers and of the centers of mass of the chains, demonstrates a systematic speed-up relative to the pure reptation behavior expected for monodisperse melts of sufficiently long polymers. On the other hand, the diffusion of the matrix chains is only weakly perturbed by the diluted long probe chains. The simulation results are qualitatively consistent with the theoretical predictions based on the constraint release Rouse model, but a detailed comparison reveals the existence of a broad distribution of disentanglement rates, which is partly confirmed by an analysis of the packing and diffusion of the matrix chains in the tube region of the probe chains. A coarse-grained simulation model based on the tube Rouse model, incorporating the probability distribution of the tube segment jump rates, is developed and shows results qualitatively consistent with the fine-scale molecular dynamics simulations. However, we observe a breakdown of the tube Rouse model when the short chain length is decreased to around N_S = 80, which is roughly 3.5 times the entanglement spacing N_e(P) = 23. The location of this transition may be sensitive to the chain bending potential used in our simulations.
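For illustration, a minimal Python sketch of the mean square displacement measurement mentioned above, applied to a synthetic random-walk trajectory rather than the actual simulation data; the array shapes and names are assumptions, not the authors' analysis code.

```python
import numpy as np

def mean_square_displacement(trajectory):
    """Mean square displacement relative to the first frame, averaged over particles.
    `trajectory` has shape (n_frames, n_particles, 3) in unwrapped coordinates."""
    displacement = trajectory - trajectory[0]
    return np.mean(np.sum(displacement**2, axis=2), axis=1)

# Illustrative use on a synthetic random-walk trajectory.
rng = np.random.default_rng(0)
steps = rng.normal(scale=0.1, size=(1000, 50, 3))
traj = np.cumsum(steps, axis=0)
g1 = mean_square_displacement(traj)                               # monomer-like MSD
g3 = mean_square_displacement(traj.mean(axis=1, keepdims=True))   # centre-of-mass MSD
```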
Abstract:
In "Constructing Melchior Lorichs's Panorama of Constantinople," Nigel Westbrook, Kenneth Rainsbury Dark, and Rene Van Meeuwen propose that Melchior Lorichs's 1559 Panorama of Constantinople was created by using a viewing grid. The panorama is thus a reliable graphic source for the lost or since-altered Ottoman and Byzantine buildings of the city. The panorama appears to lie outside the conventional symbolic mode of topographical depiction common for its period and constitutes a rare "scientific" record of an encounter of a perspicacious observer with a vast subject. The drawing combines elements of allegory with extensive empirical observation. Several unknown structures shown on the drawing have been located in relation to the present-day topography of Istanbul, as a test case for further research.
Abstract:
This essay explores how The Truman Show, Peter Weir’s film about a television show, deserves more sustained analysis than it has received since its release in 1998. I will argue that The Truman Show problematizes the binary oppositions of cinema/television, disruption/stability, reality/simulation and outside/inside that structure it. The Truman Show proposes that binary oppositions such as outside/inside exist in a mutually implicating relationship. This deconstructionist strategy not only questions the film’s critical position, but also enables a reflection on the very status of film analysis itself.
Abstract:
A multi-layered architecture of self-organizing neural networks is being developed as part of an intelligent alarm processor to analyse a stream of power grid fault messages and provide a suggested diagnosis of the fault location. Feedback concerning the accuracy of the diagnosis is provided by an object-oriented grid simulator which acts as an external supervisor to the learning system. The utilization of artificial neural networks within this environment should result in a powerful generic alarm processor which will not require extensive training by a human expert to produce accurate results.
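As an illustration of the self-organizing neural network building block referred to above, here is a sketch of a basic self-organizing map (SOM) training loop in Python. This is an assumption-laden toy example only; the multi-layered architecture, alarm message encoding, and simulator feedback described in the abstract are not reproduced.

```python
import numpy as np

def som_train(data, grid_shape=(8, 8), n_epochs=20, lr0=0.5, sigma0=3.0, seed=0):
    """Minimal self-organizing map: each input vector pulls its best-matching
    unit (and that unit's grid neighbours) towards itself."""
    rng = np.random.default_rng(seed)
    n_units = grid_shape[0] * grid_shape[1]
    weights = rng.normal(size=(n_units, data.shape[1]))
    # 2-D grid coordinates of each unit, used for the neighbourhood function.
    coords = np.array([(i, j) for i in range(grid_shape[0])
                       for j in range(grid_shape[1])], dtype=float)
    for epoch in range(n_epochs):
        lr = lr0 * (1 - epoch / n_epochs)             # decaying learning rate
        sigma = sigma0 * (1 - epoch / n_epochs) + 1e-3  # shrinking neighbourhood
        for x in data[rng.permutation(len(data))]:
            bmu = np.argmin(np.linalg.norm(weights - x, axis=1))  # best-matching unit
            grid_dist = np.linalg.norm(coords - coords[bmu], axis=1)
            h = np.exp(-grid_dist**2 / (2 * sigma**2))            # neighbourhood kernel
            weights += lr * h[:, None] * (x - weights)
    return weights

# Illustrative use: cluster synthetic "alarm message" feature vectors.
features = np.random.default_rng(1).normal(size=(100, 5))
trained_weights = som_train(features)
```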
Abstract:
In the emerging digital economy, the management of information in aerospace and construction organisations is facing a particular challenge due to the ever-increasing volume of information and the extensive use of information and communication technologies (ICTs). This paper addresses the problems of information overload and the value of information in both industries by providing some cross-disciplinary insights. In particular, it identifies major issues and challenges in current information evaluation practice in these two industries. Interviews were conducted to obtain a spectrum of industrial perspectives (director/strategic, project management and ICT/document management) on these issues, in particular on information storage and retrieval strategies and on the contrasting knowledge and information management approaches of personalisation and codification. Industry feedback was collected through a follow-up workshop to strengthen the findings of the research. An information-handling agenda is outlined for the development of a future Information Evaluation Methodology (IEM), which could facilitate the codification of high-value information in order to support through-life knowledge and information management (K&IM) practice.
Abstract:
Typeface design: collaborative work commissioned by Adobe Inc. Published but unreleased. The Adobe Devanagari typefaces were commissioned from Tiro Typeworks and collaboratively designed by Tim Holloway, Fiona Ross and John Hudson, beginning in 2005. The types were officially released in 2009. The design brief was to produce a typeface for modern business communications in Hindi and other languages, to be legible both in print and on screen. Adobe Devanagari was designed to be highly readable in a range of situations, including quite small sizes in spreadsheets and in continuous text setting, as well as at display sizes, where the full character of the typeface reveals itself. The construction of the letters is based on traditional penmanship but possesses less stroke contrast than many Devanagari types, in order to maintain strong, legible forms at smaller sizes. To achieve a dynamic, fluid style, the design features a rounded treatment of distinguishing terminals and stroke reversals, open counters that also aid legibility at smaller sizes, and delicately flaring strokes. Together, these details reveal an original hand and provide a contemporary approach that is clean, clear and comfortable to read whether in short or long passages of text. This new approach to a traditional script is intended to counter the dominance of the rigid, staccato-like effects of straight verticals and horizontals in earlier types and many existing fonts. OpenType Layout features in the fonts provide both automated and discretionary access to an extensive glyph set, enabling sophisticated typography. Many conjuncts preferred in classical literary texts, and particularly in some North Indian languages, are included; these literary conjuncts may be substituted by specially designed alternative linear forms and fitted half forms. The length of the ikars (ि and ी) varies automatically according to the width of the adjacent letter or conjunct. Regional variants of characters and numerals (e.g. Marathi forms) are included as alternates. Careful attention has been given to the placement of all vowel signs and modifiers. The fonts include both proportional and tabular numerals in Indian and European styles. Extensive kerning covers several thousand possible combinations of half forms and full forms to anticipate arbitrary conjuncts in foreign loan words.