791 results for Computing methodologies


Relevance: 20.00%

Abstract:

Analogue computers provide actual rather than virtual representations of model systems. They are powerful and engaging computing machines that are cheap and simple to build. This two-part Retronics article helps you build (and understand!) your own analogue computer to simulate the Lorenz butterfly that has become iconic for chaos theory.
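
For readers without the hardware, here is a minimal digital sketch of the Lorenz system that such an analogue computer patches; the classic parameter values (sigma = 10, rho = 28, beta = 8/3), the Euler integration and the step size are assumptions for illustration, not details taken from the article.

    # Minimal digital sketch of the Lorenz system an analogue computer would patch.
    # Classic parameters (sigma=10, rho=28, beta=8/3) are assumed, not taken from the article.
    import numpy as np

    def lorenz_step(state, dt, sigma=10.0, rho=28.0, beta=8.0 / 3.0):
        """One explicit Euler step of the Lorenz equations."""
        x, y, z = state
        dx = sigma * (y - x)
        dy = x * (rho - z) - y
        dz = x * y - beta * z
        return np.array([x + dt * dx, y + dt * dy, z + dt * dz])

    state = np.array([1.0, 1.0, 1.0])      # arbitrary initial condition
    trajectory = [state]
    for _ in range(10000):
        state = lorenz_step(state, dt=0.01)
        trajectory.append(state)
    trajectory = np.array(trajectory)       # columns: x, y, z of the butterfly
    print(trajectory[-1])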

Relevance: 20.00%

Abstract:

Markowitz showed that assets can be combined to produce an 'efficient' portfolio that gives the highest level of portfolio return for any level of portfolio risk, as measured by the variance or standard deviation. These portfolios can then be connected to generate what is termed an 'Efficient Frontier' (EF). In this paper we discuss the calculation of the Efficient Frontier for combinations of assets, again using the spreadsheet Optimiser. To illustrate the derivation of the Efficient Frontier, we use data from the Investment Property Databank Long Term Index of Investment Returns for the period 1971 to 1993. Many investors may require a specific level of holding, or a restriction on holdings, in at least some of the assets. Such additional constraints can readily be incorporated into the model to generate a constrained EF with upper and/or lower bounds, which can then be compared with the unconstrained EF to see whether the reduction in return is acceptable. To see the effect that these additional constraints may have, we adopt a fairly typical pension fund profile, with no more than 20% of the total held in Property. The paper shows that it is now relatively easy to use the Optimiser available in at least one spreadsheet (Excel) to calculate efficient portfolios for various levels of risk and return, both constrained and unconstrained, and so to generate any number of Efficient Frontiers.
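
As a rough illustration of the same calculation outside a spreadsheet, the sketch below traces a long-only efficient frontier by minimising portfolio variance for a range of target returns; the three-asset return and covariance figures are invented for the example and are not the IPD data used in the paper.

    # Sketch of a Markowitz efficient frontier via quadratic optimisation.
    # The three-asset means and covariance matrix are illustrative, not IPD data.
    import numpy as np
    from scipy.optimize import minimize

    mu = np.array([0.08, 0.10, 0.12])                 # expected returns (hypothetical)
    cov = np.array([[0.04, 0.01, 0.00],
                    [0.01, 0.09, 0.02],
                    [0.00, 0.02, 0.16]])              # covariance matrix (hypothetical)

    def frontier_point(target_return, bounds=None):
        """Minimum-variance weights achieving a target return (long-only by default)."""
        n = len(mu)
        bounds = bounds or [(0.0, 1.0)] * n           # a holding cap, e.g. 20%, would go here
        constraints = [{"type": "eq", "fun": lambda w: w.sum() - 1.0},
                       {"type": "eq", "fun": lambda w: w @ mu - target_return}]
        result = minimize(lambda w: w @ cov @ w, np.full(n, 1.0 / n),
                          bounds=bounds, constraints=constraints)
        return result.x, np.sqrt(result.fun)

    for target in np.linspace(mu.min(), mu.max(), 5):
        weights, risk = frontier_point(target)
        print(f"return {target:.3f}  risk {risk:.3f}  weights {np.round(weights, 2)}")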

Relevance: 20.00%

Abstract:

The impending threat of global climate change and its regional manifestations is among the most important and urgent problems facing humanity. Society needs accurate and reliable estimates of changes in the probability of regional weather variations to develop science-based adaptation and mitigation strategies. Recent advances in weather prediction and in our understanding and ability to model the climate system suggest that it is both necessary and possible to revolutionize climate prediction to meet these societal needs. However, the scientific workforce and the computational capability required to bring about such a revolution are not available in any single nation. Motivated by the success of internationally funded infrastructure in other areas of science, this paper argues that, because of the complexity of the climate system, and because the regional manifestations of climate change occur mainly through changes in the statistics of regional weather variations, the scientific and computational requirements for predicting its behavior reliably are so enormous that the nations of the world should create a small number of multinational high-performance computing facilities dedicated to the grand challenge of developing the capability to predict climate variability and change on both global and regional scales over the coming decades. Such facilities will play a key role in the development of next-generation climate models, build global capacity in climate research, nurture a highly trained workforce, and engage the global user community, policy-makers, and stakeholders. We recommend the creation of a small number of multinational facilities, each with a computing capability of about 20 petaflops in the near term, about 200 petaflops within five years, and 1 exaflop by the end of the next decade. Each facility should have a scientific workforce sufficient to develop and maintain the software and data analysis infrastructure. Such facilities will make it possible to establish what horizontal and vertical resolution in atmospheric and ocean models is necessary for more confident predictions at the regional and local level; current limitations in computing power have severely restricted such an investigation, which is now badly needed. These facilities will also provide the world's scientists with computational laboratories for fundamental research on weather-climate interactions using 1-km resolution models, and on atmospheric, terrestrial, cryospheric, and oceanic processes at even finer scales. Each facility should have enabling infrastructure, including hardware, software, and data analysis support, and the scientific capacity to interact with the national centers and other visitors. This will accelerate our understanding of how the climate system works and how to model it, and will ultimately enable the climate community to provide society with climate predictions based on our best scientific knowledge and the most advanced technology.

Relevance: 20.00%

Abstract:

Design summer years representing near-extreme hot summers have been used in the United Kingdom for the evaluation of thermal comfort and overheating risk. These years have been selected from measured weather data that are essentially representative of an assumed stationary climate. Recent developments have made available 'morphed' equivalents of these years, produced by shifting and stretching the measured variables using change factors from the UKCIP02 climate projections. The release of the latest, probabilistic, climate projections of UKCP09, together with the availability of a weather generator that can produce plausible daily or hourly sequences of weather variables, has opened up the opportunity to generate new design summer years for use in risk-based decision-making. There are many possible methods for producing design summer years from UKCP09 output: in this article, the original concept of the design summer year is largely retained, but a number of alternative methodologies for generating the years are explored. An alternative, more robust measure of warmth (weighted cooling degree hours) is also employed. It is demonstrated that the UKCP09 weather generator is capable of producing years for the baseline period that are comparable with those in current use. Four methodologies for the generation of future years are described, and their output is related to the future (deterministic) years that are currently available. It is concluded that, in general, years produced from the UKCP09 projections are warmer than those generated previously. Practical applications: The methodologies described in this article will enable designers who have access to the output of the UKCP09 weather generator (WG) to generate design summer year hourly files tailored to their needs. The files produced will differ according to the methodology selected, in addition to location, emissions scenario and timeslice.
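
To make the warmth measure concrete, a minimal sketch of a weighted cooling degree hours calculation over an hourly temperature series follows; the base temperature and the quadratic weighting are assumptions for illustration and may differ from the definition adopted in the article.

    # Sketch: weighted cooling degree hours (WCDH) from an hourly temperature series.
    # The base temperature and quadratic weighting are illustrative assumptions;
    # the article's own definition should be followed in practice.
    import numpy as np

    def weighted_cooling_degree_hours(temps_c, base_c=22.0, power=2.0):
        """Sum of (T - base)**power over the hours where T exceeds the base temperature."""
        exceedance = np.clip(np.asarray(temps_c) - base_c, 0.0, None)
        return float(np.sum(exceedance ** power))

    # One synthetic summer day of hourly temperatures (degrees C), peaking mid-afternoon.
    hours = np.arange(24)
    temps = 18.0 + 9.0 * np.sin(np.pi * (hours - 6) / 14).clip(min=0.0)
    print(weighted_cooling_degree_hours(temps))   # larger values indicate a warmer candidate year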

Relevance: 20.00%

Abstract:

Pocket Data Mining (PDM) is our new term describing collaborative mining of streaming data in mobile and distributed computing environments. With sheer amounts of streaming data now available for subscription on smart mobile phones, using these data for decision making with data stream mining techniques has become achievable owing to the increasing power of handheld devices. Wireless communication among these devices using Bluetooth and WiFi technologies has opened the door wide to collaborative mining among mobile devices within the same range that are running data mining techniques targeting the same application. This paper proposes a new architecture that we have prototyped for realizing significant applications in this area. We propose using mobile software agents in this application for several reasons. Most importantly, the autonomic, intelligent behaviour of agent technology has been the driving force for using it in this application. Other efficiency reasons are discussed in detail in this paper. Experimental results showing the feasibility of the proposed architecture are presented and discussed.
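
A toy, single-process impression of this collaborative set-up is sketched below; the agent classes and the majority-vote decision are invented stand-ins for illustration and are not the PDM prototype's mobile-agent implementation.

    # Toy, single-process impression of the Pocket Data Mining idea: agents mine
    # their local data streams independently, then pool their votes for a decision.
    # All names here are illustrative; the real PDM prototype uses mobile software agents.
    from collections import Counter
    import random

    class MiningAgent:
        """Holds a trivially simple 'model': the majority label seen on its local stream."""
        def __init__(self, name):
            self.name = name
            self.counts = Counter()

        def observe(self, label):                 # incremental update from the local stream
            self.counts[label] += 1

        def vote(self):
            return self.counts.most_common(1)[0][0]

    def collaborative_decision(agents):
        """A 'decision agent' consults every mining agent and takes a majority vote."""
        return Counter(agent.vote() for agent in agents).most_common(1)[0][0]

    random.seed(0)
    agents = [MiningAgent(f"device-{i}") for i in range(3)]
    for agent in agents:                          # each device sees its own noisy stream
        for _ in range(100):
            agent.observe("alert" if random.random() < 0.6 else "normal")
    print(collaborative_decision(agents))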

Relevance: 20.00%

Abstract:

The P-found protein folding and unfolding simulation repository is designed to allow scientists to perform analyses across large, distributed simulation data sets. There are two storage components in P-found: a primary repository of simulation data and a data warehouse. Here we demonstrate how grid technologies can support multiple, distributed P-found installations. In particular we look at two aspects: first, how grid data management technologies can be used to access the distributed data warehouses; and second, how the grid can be used to transfer analysis programs to the primary repositories. The latter is an important and challenging aspect of P-found because the data volumes involved are too large to be centralised. The grid technologies we are developing with the P-found system will allow new large data sets of protein folding simulations to be accessed and analysed in novel ways, with significant potential for enabling new scientific discoveries.
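
The underlying pattern, shipping the analysis to the data rather than centralising the data, can be illustrated with the toy sketch below; the repository class and the analysis function are invented stand-ins, not the P-found or grid middleware interfaces.

    # Toy illustration of the "send the analysis to the data" pattern behind P-found:
    # instead of downloading huge simulation trajectories, a small analysis function is
    # shipped to each site and only the (tiny) results travel back.
    # The SimulationRepository class is an invented stand-in, not the P-found interface.
    import statistics

    class SimulationRepository:
        """Pretend primary repository holding folding trajectories locally at one site."""
        def __init__(self, site, trajectories):
            self.site = site
            self.trajectories = trajectories      # too large to centralise in reality

        def run_analysis(self, analysis):
            # The analysis executes where the data live; only summaries are returned.
            return {self.site: [analysis(t) for t in self.trajectories]}

    def mean_value(trajectory):                   # example of a shipped analysis
        return statistics.mean(trajectory)

    sites = [
        SimulationRepository("site-A", [[1.2, 1.4, 1.1], [2.0, 2.2, 2.1]]),
        SimulationRepository("site-B", [[0.9, 1.0, 1.1]]),
    ]
    results = {}
    for repo in sites:                            # "transfer" of the analysis program
        results.update(repo.run_analysis(mean_value))
    print(results)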

Relevance: 20.00%

Abstract:

Distributed and collaborative data stream mining in a mobile computing environment is referred to as Pocket Data Mining (PDM). The large number of data streams to which smart phones can subscribe, or which they can sense directly, coupled with the increasing computational power of handheld devices, motivates the development of PDM as a decision-making system. This emerging area of study was shown to be feasible in an earlier study using the technological enablers of mobile software agents and stream mining techniques [1]. A typical PDM process starts with mobile agents roaming the network to discover relevant data streams and resources. Other (mobile) agents encapsulating stream mining techniques then visit the relevant nodes in the network in order to build evolving data mining models. Finally, a third type of mobile agent roams the network consulting the mining agents to reach a final collaborative decision when required by one or more users. In this paper, we propose the use of distributed Hoeffding trees and Naive Bayes classifiers in the PDM framework over vertically partitioned data streams. Mobile policing, health monitoring and stock market analysis are among the possible applications of PDM. An extensive experimental study is reported, showing the effectiveness of collaborative data mining with the two classifiers.
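
A compact way to picture the vertically partitioned set-up is the sketch below, in which each simulated device trains a classifier on its own subset of features and a weighted vote gives the collaborative decision; scikit-learn's batch DecisionTreeClassifier and GaussianNB are stand-ins for the incremental Hoeffding tree and Naive Bayes learners used in the paper, and the data are synthetic.

    # Sketch of collaborative classification over vertically partitioned data.
    # Batch scikit-learn models stand in for the paper's incremental Hoeffding tree
    # and Naive Bayes stream classifiers; the partitioning scheme is illustrative.
    import numpy as np
    from sklearn.datasets import make_classification
    from sklearn.naive_bayes import GaussianNB
    from sklearn.tree import DecisionTreeClassifier

    X, y = make_classification(n_samples=600, n_features=8, random_state=1)
    X_train, y_train, X_test, y_test = X[:500], y[:500], X[500:], y[500:]

    # Each "device" only sees a vertical slice (a subset of the features).
    partitions = [slice(0, 3), slice(3, 6), slice(6, 8)]
    models = [DecisionTreeClassifier(max_depth=3), GaussianNB(), GaussianNB()]
    for model, cols in zip(models, partitions):
        model.fit(X_train[:, cols], y_train)

    def collaborative_predict(x_row):
        """Weighted majority vote across the devices' local classifiers."""
        votes = {}
        for model, cols in zip(models, partitions):
            label = model.predict(x_row[cols].reshape(1, -1))[0]
            weight = model.score(X_train[:, cols], y_train)   # crude confidence weight
            votes[label] = votes.get(label, 0.0) + weight
        return max(votes, key=votes.get)

    preds = np.array([collaborative_predict(row) for row in X_test])
    print("collaborative accuracy:", (preds == y_test).mean())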

Relevance: 20.00%

Abstract:

Real estate depreciation continues to be a critical issue for investors and the appraisal profession in the UK in the 1990s. Depreciation-sensitive cash flow models have been developed, but there is a real need to develop further empirical methodologies to determine rental depreciation rates for input into these models. Although building quality has been found to be an important explanatory variable in depreciation, it is very difficult to incorporate it into such models or to analyse it retrospectively. It is essential to examine previous depreciation research from real estate and economics in the USA and UK in order to understand the issues in constructing a valid and pragmatic way of calculating rental depreciation. Distinguishing between 'depreciation' and 'obsolescence' is important, and the pattern of depreciation in any study can be influenced by factors such as the type (longitudinal or cross-sectional) and timing of the study, and the market state. Longitudinal studies can analyse change more directly than cross-sectional studies. Any methodology for calculating rental depreciation rates should be formulated in the context of issues such as 'censored sample bias', 'lemons' and 'filtering', which have been highlighted in key US literature from the field of economic depreciation. Property depreciation studies in the UK have tended to overlook this literature, however. Although data limitations and constraints reduce the ability of empirical property depreciation work in the UK to consider these issues fully, 'averaging' techniques and ordinary least squares (OLS) regression can both provide a consistent way of calculating rental depreciation rates within a 'cohort' framework.
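
As a concrete illustration of the OLS approach within a single cohort, the sketch below regresses log rental value on building age and reads an annual depreciation rate off the slope; the rent and age figures are invented purely for the example.

    # Sketch: estimating an annual rental depreciation rate by OLS within a cohort,
    # regressing log rental value on building age.  The figures are illustrative only.
    import numpy as np

    ages = np.array([0, 5, 10, 15, 20, 25], dtype=float)     # building age in years
    rents = np.array([100.0, 92.0, 83.0, 77.0, 70.0, 64.0])  # rental value index (hypothetical)

    slope, intercept = np.polyfit(ages, np.log(rents), 1)    # OLS on log rents
    annual_depreciation = 1.0 - np.exp(slope)                # convert the slope to a yearly rate
    print(f"estimated rental depreciation: {annual_depreciation:.2%} per year")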