36 results for methodology of indexation
Abstract:
Risk and uncertainty are, to say the least, poorly considered by most individuals involved in real estate analysis, in both development and investment appraisal. Surveyors continue to express 'uncertainty' about the value (risk) of using relatively objective methods of analysis to account for these factors. These methods attempt to identify the risk elements more explicitly. Conventionally this is done by deriving probability distributions for the uncontrolled variables in the system. A suggested 'new' way of "being able to express our uncertainty or slight vagueness about some of the qualitative judgements and not entirely certain data required in the course of the problem..." uses the application of fuzzy logic. This paper discusses and demonstrates the terminology and methodology of fuzzy analysis. In particular, it compares these procedures with those used in 'conventional' risk analysis approaches and critically investigates whether a fuzzy approach offers an alternative to probability-based analysis for dealing with aspects of risk and uncertainty in real estate analysis.
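The core idea the abstract contrasts with probability distributions can be illustrated with a triangular fuzzy number. The sketch below is not from the paper; the function and the rental-value example are hypothetical, and serve only to show how fuzzy analysis encodes "slight vagueness" as a membership degree rather than a probability:

```python
def triangular_membership(x, low, mode, high):
    """Degree (0..1) to which x belongs to the triangular fuzzy number (low, mode, high)."""
    if x <= low or x >= high:
        return 0.0
    if x <= mode:
        return (x - low) / (mode - low)
    return (high - x) / (high - mode)

# e.g. a rent judged to be "about 100, certainly between 80 and 130":
print(triangular_membership(100, 80, 100, 130))  # -> 1.0 (fully compatible)
print(triangular_membership(90, 80, 100, 130))   # -> 0.5 (partially compatible)
```

Unlike a probability density, these degrees need not integrate to one; they express compatibility of a value with a vague judgement.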
Abstract:
A set of random variables is exchangeable if its joint distribution function is invariant under permutation of the arguments. The concept of exchangeability is discussed, with a view towards potential application in evaluating ensemble forecasts. It is argued that the paradigm of ensembles being an independent draw from an underlying distribution function is probably too narrow; allowing ensemble members to be merely exchangeable might be a more versatile model. The question is discussed whether established methods of ensemble evaluation need alteration under this model, with reliability being given particular attention. It turns out that the standard methodology of rank histograms can still be applied. As a first application of the exchangeability concept, it is shown that the method of minimum spanning trees to evaluate the reliability of high dimensional ensembles is mathematically sound.
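The rank-histogram method the abstract refers to can be sketched in a few lines. The data below are synthetic (observations and ensemble members drawn from the same distribution, i.e. an exchangeable, reliable ensemble by construction); under reliability the histogram of observation ranks should be roughly flat:

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic setup: 1000 forecast cases, 10 exchangeable ensemble members each,
# with the verifying observation drawn from the same distribution.
n_cases, n_members = 1000, 10
ensembles = rng.normal(size=(n_cases, n_members))
observations = rng.normal(size=n_cases)

# Rank of each observation among its ensemble members (0 .. n_members).
ranks = (ensembles < observations[:, None]).sum(axis=1)

# A reliable ensemble yields a roughly uniform rank histogram.
histogram = np.bincount(ranks, minlength=n_members + 1)
```

Each of the n_members + 1 bins has expected count n_cases / (n_members + 1) under reliability; systematic U- or dome-shapes indicate under- or over-dispersion.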
Abstract:
The development of effective environmental management plans and policies requires a sound understanding of the driving forces involved in shaping and altering the structure and function of ecosystems. However, driving forces, especially anthropogenic ones, are defined and operate at multiple administrative levels, which do not always match ecological scales. This paper presents an innovative methodology of analysing drivers of change by developing a typology of scale sensitivity of drivers that classifies and describes the way they operate across multiple administrative levels. Scale sensitivity varies considerably among drivers, which can be classified into five broad categories depending on the response of ‘evenness’ and ‘intensity change’ when moving across administrative levels. Indirect drivers tend to show low scale sensitivity, whereas direct drivers show high scale sensitivity, as they operate in a non-linear way across the administrative scale. Thus policies addressing direct drivers of change, in particular, need to take scale into consideration during their formulation. Moreover, such policies must have a strong spatial focus, which can be achieved either by encouraging local–regional policy making or by introducing high flexibility in (inter)national policies to accommodate increased differentiation at lower administrative levels. High-quality data are available for several drivers; however, the availability of consistent data at all levels for non-anthropogenic drivers is a major constraint to mapping and assessing their scale sensitivity. This lack of data may hinder effective policy making for environmental management, since it restricts the ability to fully account for scale sensitivity of natural drivers in policy design.
Abstract:
The article examines the customary international law credentials of the humanitarian law rules proposed by the International Committee of the Red Cross (ICRC) in 2005. It relies on the BIICL/Chatham House analysis as a ‘constructive comment’ on the methodology of the ICRC study and the rules formed as a result of that methodology with respect to the dead and missing, as an aid to determination of their customary law status. It shows that most of the rules studied have a customary international law pedigree which conforms to the conclusions formed on the rules generally in the Wilmshurst and Breau study. However, the rules with respect to return of personal effects, recording location of graves and notification of relatives of access to gravesites do not seem, even on a majoritarian/deductive approach, to have a sufficient volume of state practice to establish them as customary with respect to civilians.
Abstract:
As satellite technology develops, satellite rainfall estimates are likely to become ever more important in the world of food security. It is therefore vital to be able to identify the uncertainty of such estimates and for end users to be able to use this information in a meaningful way. This paper presents new developments in the methodology of simulating satellite rainfall ensembles from thermal infrared satellite data. Although the basic sequential simulation methodology has been developed in previous studies, it was not suitable for use in regions with more complex terrain and limited calibration data. Developments in this work include the creation of a multithreshold, multizone calibration procedure, plus investigations into the causes of an overestimation of low rainfall amounts and the best way to take into account clustered calibration data. A case study of the Ethiopian highlands has been used as an illustration.
Abstract:
This paper analyses changes in corporate social responsibility (CSR) reporting practices among Saudi listed companies over the past three years. Using content analysis of annual reports, a sample of 174 annual reports representing 58 Saudi listed companies from different sectors was analysed to investigate the level of CSR disclosure in the years 2010 to 2012. Our paper focuses on trends of CSR information in four categories: environment, employee, community and customer. CSR disclosure studies are limited in developing countries, and especially so in the case of Saudi Arabia. Overall, a significant increase in CSR reporting was observed over that period, despite the fact that private-sector companies are still in the early stages of awareness as far as integrating CSR activities into their corporate policies and strategies is concerned.
Abstract:
Do philosophers use intuitions? Should philosophers use intuitions? Can philosophical methods (where intuitions are concerned) be improved upon? In order to answer these questions we need to have some idea of how we should go about answering them. I defend a way of going about methodology of intuitions: a metamethodology. I claim the following: (i) we should approach methodological questions about intuitions with a thin conception of intuitions in mind; (ii) we should carve intuitions finely; and, (iii) we should carve to a grain to which we are sensitive in our everyday philosophising. The reason is that, unless we do so, we don’t get what we want from philosophical methodology. I argue that what we want is information that will aid us in formulating practical advice concerning how to do philosophy responsibly/well/better.
Abstract:
Most studies concerned with the representations of local people in tourism discourse point to the prevalence of stereotypic images, asserting that contemporary tourism perpetuates colonial legacy and gendered discursive practices. This claim has been, to some extent, contested in research that explores representations of hosts in local tourism materials, claiming that tourism can also discursively resist the dominant Western imagery. While the evidence for the existence of hegemonic and diverging discourses about the local ‘Other’ seems compelling, the empirical basis of this research is rather small and often limited to one geographic context. The present study addresses these shortcomings by examining representations of hosts in a larger corpus of promotional tourism materials, including texts produced by Western and local tourism industries. The data are investigated using the methodology of Corpus-Assisted Discourse Studies (CADS). By comparing external with internal (self-)representations, this study verifies and refines some of the claims on the subject and offers a much more nuanced picture of representations that defies the black-and-white scenarios proposed in previous research.
Abstract:
The level of agreement between climate model simulations and observed surface temperature change is a topic of scientific and policy concern. While the Earth system continues to accumulate energy due to anthropogenic and other radiative forcings, estimates of recent surface temperature evolution fall at the lower end of climate model projections. Global mean temperatures from climate model simulations are typically calculated using surface air temperatures, while the corresponding observations are based on a blend of air and sea surface temperatures. This work quantifies a systematic bias in model-observation comparisons arising from differential warming rates between sea surface temperatures and surface air temperatures over oceans. A further bias arises from the treatment of temperatures in regions where the sea ice boundary has changed. Applying the methodology of the HadCRUT4 record to climate model temperature fields accounts for 38% of the discrepancy in trend between models and observations over the period 1975–2014.
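The blending bias described above can be illustrated with a toy calculation. The trend values below are assumed for illustration only (they are not the paper's numbers); the point is that replacing surface air temperature (tas) with sea surface temperature (tos) over the ocean, as observational records effectively do, lowers the apparent global-mean trend:

```python
# Assumed illustrative warming trends (K per year), not values from the paper.
tas_trend_land = 0.030    # surface air temperature over land
tas_trend_ocean = 0.022   # surface air temperature over ocean
tos_trend_ocean = 0.018   # sea surface temperature (warms more slowly)
land_frac = 0.29          # approximate land fraction of the globe

# Pure surface-air-temperature global mean, as typically taken from models:
tas_global = land_frac * tas_trend_land + (1 - land_frac) * tas_trend_ocean

# Blended air/sea mean, as in the observational record:
blended = land_frac * tas_trend_land + (1 - land_frac) * tos_trend_ocean

# The blended trend is systematically lower, mimicking part of the
# model-observation discrepancy the abstract quantifies.
```

With these assumed numbers the blended trend is a few hundredths of a kelvin per decade below the pure-tas trend; the paper's actual accounting uses the HadCRUT4 masking and blending procedure applied to model fields.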
Abstract:
Modern methods of spawning new technological motifs are not appropriate when it is desired to realize artificial life as an actual real world entity unto itself (Pattee 1995; Brooks 2006; Chalmers 1995). Many fundamental aspects of such a machine are absent in common methods, which generally lack methodologies of construction. In this paper we mix classical and modern studies in order to attempt to realize an artificial life form from first principles. A model of an algorithm is introduced, its methodology of construction is presented, and the fundamental source from which it sprang is discussed.
Abstract:
In developing Isotype, Otto Neurath and his colleagues were the first to systematically explore a consistent visual language as part of an encyclopedic approach to representing all aspects of the physical world. The pictograms used in Isotype have a secure legacy in today's public information symbols, but Isotype was more than this: it was designed to communicate social facts memorably to less educated groups, including schoolchildren and workers, reflecting its initial testing ground in the socialist municipality of Vienna during the 1920s. The social engagement and methodology of Isotype are examined here in order to draw some lessons for information design today.
Abstract:
A novel partitioned least squares (PLS) algorithm is presented, in which estimates from several simple system models are combined by means of a Bayesian methodology of pooling partial knowledge. The method has the added advantage that, when the simple models are of a similar structure, it lends itself directly to parallel processing procedures, thereby speeding up the entire parameter estimation process by several factors.
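A standard instance of Bayesian pooling is precision-weighted averaging of independent estimates; the sketch below shows that idea, not the paper's actual PLS algorithm, with made-up numbers. Each sub-model's term can be computed independently, which is what makes this style of combination naturally parallelisable:

```python
import numpy as np

# Hypothetical example: three simple sub-models each estimate the same
# scalar parameter, with an associated variance (uncertainty).
estimates = np.array([1.90, 2.10, 2.05])
variances = np.array([0.04, 0.09, 0.01])

# Precision-weighted (Bayesian) pooling: more certain estimates get
# more weight, and the pooled variance is smaller than any single one.
precisions = 1.0 / variances
pooled = (precisions * estimates).sum() / precisions.sum()
pooled_var = 1.0 / precisions.sum()
```

The pooled estimate lies closest to the most precise sub-model, and the pooled variance is below the smallest individual variance, illustrating the gain from combining partial knowledge.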
Abstract:
Impact Assessments (IAs) were introduced at the EU level under the rhetorical facade of ‘better regulation’. The actual aim was to improve not only the quality but also the reputation of EU regulation before stakeholders. However, evidence brought forward by a number of evaluations pointed out that IAs are yet to achieve acceptable quality standards. The paper offers an overview of different disciplinary approaches for looking at IAs. It suggests that risk regulation encompasses the theoretical foundations to help understand the role of IAs in the EU decision-making process. The analysis of 60 early preliminary IAs provides empirical evidence regarding policy alternatives, methodology of consultation and use of quantitative techniques. Findings suggest that dawn-period IAs were used mainly to provide some empirical evidence for regulatory intervention in front of stakeholders. The paper concludes with assumptions about the future role of IAs at EU level.
Abstract:
How do changing notions of children’s reading practices alter or even create classic texts? This article looks at how the nineteenth-century author Jules Verne (1828-1905) was modernised by Hachette for their Bibliothèque Verte children’s collection in the 1950s and 60s. Using the methodology of adaptation studies, the article reads the abridged texts in the context of the concerns that emerged in postwar France about what children were reading. It examines how these concerns shaped editorial policy, and the transformations that Verne’s texts underwent before they were considered suitable for the children of the baby-boom generation. It asks whether these adapted versions damaged Verne’s reputation, as many literary scholars have suggested, or if the process of dividing his readership into children and adults actually helped to reinforce the new idea of his texts as complex and multilayered. In so doing, this article provides new insights into the impact of postwar reforms on children’s publishing and explores the complex interplay between abridgment, censorship, children’s literature and the adult canon.