912 results for Output data
Abstract:
Several alternative approaches have been discussed: Levenberg-Marquardt (unsatisfactory convergence speed and a tendency to get trapped in local minima), the bacterial algorithm (speed problems with large dimensionality), and clustering (no reliable criterion for the number of clusters, plus the dimensionality problem).
Abstract:
The Benguela Current, located off the west coast of southern Africa, is tied to a highly productive upwelling system [1]. Over the past 12 million years, the current has cooled, and upwelling has intensified [2,3,4]. These changes have been variously linked to atmospheric and oceanic changes associated with the glaciation of Antarctica and global cooling [5], the closure of the Central American Seaway [1,6] or the further restriction of the Indonesian Seaway [3]. The upwelling intensification also occurred during a period of substantial uplift of the African continent [7,8]. Here we use a coupled ocean-atmosphere general circulation model to test the effect of African uplift on Benguela upwelling. In our simulations, uplift in the East African Rift system and in southern and southwestern Africa induces an intensification of coastal low-level winds, which leads to increased oceanic upwelling of cool subsurface waters. We compare the effect of African uplift with the simulated impact of the Central American Seaway closure [9], Indonesian Throughflow restriction [10] and Antarctic glaciation [11], and find that African uplift has at least an equally strong influence as each of the three other factors. We therefore conclude that African uplift was an important factor in driving the cooling and strengthening of the Benguela Current and coastal upwelling during the late Miocene and Pliocene epochs.
Abstract:
Vita.
Abstract:
Hominid evolution in the late Miocene has long been hypothesized to be linked to the retreat of the tropical rainforest in Africa. One cause often considered for the climatic and vegetation change is the uplift of Africa, but uplift of the Himalaya and the Tibetan Plateau has also been suggested to have affected rainfall distribution over Africa. Recent proxy data suggest that open grassland habitats were available in East Africa to the common ancestors of hominins and apes long before their divergence, and find no evidence for a closed rainforest in the late Miocene. We used the coupled global general circulation model CCSM3, including an interactively coupled dynamic vegetation module, to investigate the impact of topography on African hydro-climate and vegetation. We performed sensitivity experiments altering the elevations of the Himalaya and the Tibetan Plateau as well as of East and Southern Africa. The simulations confirm the dominant impact of African topography on the climate and vegetation development of the African tropics. Only a weak influence of prescribed Asian uplift on African climate could be detected. The model simulations show that rainforest coverage of Central Africa is strongly determined by the presence of elevated African topography. In East Africa, despite wetter conditions with lowered African topography, conditions were not favorable enough to maintain a closed rainforest. A discussion of the results with respect to other model studies indicates a minor importance of vegetation-atmosphere and ocean-atmosphere feedbacks and a large dependence of the simulated vegetation response on the land surface/vegetation model.
Abstract:
In recent years, the analysis of trade in value added has been explored by many researchers. Although they have made important contributions by developing GVC-related indices and proposing techniques for decomposing trade data, they have not yet explored the method of value chain mapping—a core element of conventional value chain analysis. This paper introduces a method of value chain mapping that uses international input-output data and reveals both upstream and downstream transactions of goods and services induced by production activities of a specific commodity or industry. This method is subsequently applied to the agricultural value chain of three Greater Mekong Sub-region countries (i.e., Thailand, Vietnam, and Cambodia). The results show that the agricultural value chain has been increasingly internationalized, although there is still room for obtaining benefits from GVC participation, especially in a country such as Cambodia.
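The value chain mapping described above builds on standard input-output accounting: upstream transactions induced by the final demand for a given commodity can be traced through the Leontief inverse. A minimal Python sketch follows, with a hypothetical three-sector coefficient matrix and final-demand vector (illustrative assumptions, not the paper's data):

    import numpy as np

    # Hypothetical 3-sector input-output coefficients (illustrative only).
    # A[i, j] = input from sector i required per unit of output of sector j.
    A = np.array([
        [0.10, 0.20, 0.05],   # agriculture
        [0.15, 0.25, 0.10],   # food processing
        [0.05, 0.10, 0.15],   # services
    ])

    # Final demand aimed at the commodity of interest (here: food processing only).
    f = np.array([0.0, 100.0, 0.0])

    # Leontief inverse: total (direct + indirect) output required per unit of final demand.
    L = np.linalg.inv(np.eye(3) - A)
    x = L @ f                 # gross output induced in every upstream sector

    # Induced upstream transactions: Z[i, j] = deliveries from sector i to sector j.
    Z = A * x
    print(np.round(x, 2))
    print(np.round(Z, 2))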
Abstract:
Because of the importance and potential usefulness of construction market statistics to firms and government, consistency between different sources of data is examined with a view to building a predictive model of construction output using construction data alone. However, a comparison of Department of Trade and Industry (DTI) and Office for National Statistics (ONS) series shows that the correlation coefficient (used as a measure of consistency) of the DTI output and DTI orders data and the correlation coefficient of the DTI output and ONS output data are low. It is not possible to derive a predictive model of DTI output based on DTI orders data alone. The question arises whether or not an alternative independent source of data may be used to predict DTI output data. Independent data produced by Emap Glenigan (EG), based on planning applications, potentially offers such a source of information. The EG data records the value of planning applications and their planned start and finish dates. However, as this data is ex ante and is not correlated with DTI output, it is not possible to use this data to describe the volume of actual construction output. Nor is it possible to use the EG planning data to predict DTI construction orders data. Further consideration of the issues raised reveals that it is not practically possible to develop a consistent predictive model of construction output using construction statistics gathered at different stages in the development process.
Abstract:
Meteorological (met) station data is used as the basis for a number of influential studies into the impacts of the variability of renewable resources. Real turbine output data is often not easy to acquire, whereas meteorological wind data, supplied at a standardised height of 10 m, is widely available. This data can be extrapolated to a standard turbine height using the wind profile power law and used to simulate the hypothetical power output of a turbine. Utilising a number of met sites in such a manner can develop a model of future wind generation output. However, the accuracy of this extrapolation is strongly dependent on the choice of the wind shear exponent alpha. This paper investigates the accuracy of the simulated generation output compared to reality using a wind farm in North Rhins, Scotland and a nearby met station in West Freugh. The results show that while a single annual average value for alpha may be selected to accurately represent the long-term energy generation from a simulated wind farm, there are significant differences between simulation and reality on an hourly power generation basis. This has implications for understanding the impact of variability of renewables on short timescales, particularly for system balancing and the way that conventional generation may be asked to respond to a high level of variable renewable generation on the grid in the future.
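The extrapolation step referred to above is the wind profile power law, v(h) = v_ref * (h / h_ref)^alpha. A minimal Python sketch follows; the hub height, measured wind speed and shear exponent are illustrative assumptions, not values from the North Rhins/West Freugh study:

    def extrapolate_wind_speed(v_ref, h_ref, h_hub, alpha):
        # Wind profile power law: v(h) = v_ref * (h / h_ref) ** alpha
        return v_ref * (h_hub / h_ref) ** alpha

    # Illustrative values: a 10 m met-station measurement scaled to an assumed 80 m hub.
    v10 = 6.0        # m/s measured at the standard 10 m height
    alpha = 0.14     # assumed shear exponent; results are sensitive to this choice
    v_hub = extrapolate_wind_speed(v10, 10.0, 80.0, alpha)
    print(f"Estimated hub-height wind speed: {v_hub:.2f} m/s")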
Abstract:
The use of Mahalanobis squared distance–based novelty detection in statistical damage identification has become increasingly popular in recent years. The merit of the Mahalanobis squared distance–based method is that it is simple and requires low computational effort, which enables the use of a higher-dimensional damage-sensitive feature that is generally more sensitive to structural changes. Mahalanobis squared distance–based damage identification is also believed to be one of the most suitable methods for modern sensing systems such as wireless sensors. Although it possesses such advantages, this method is rather strict with its input requirements, as it assumes the training data to be multivariate normal, which is not always available, particularly at an early monitoring stage. As a consequence, it may result in an ill-conditioned training model with erroneous novelty detection and damage identification outcomes. To date, there appears to be no study on how to systematically cope with such practical issues, especially in the context of a statistical damage identification problem. To address this need, this article proposes a controlled data generation scheme, which is based upon the Monte Carlo simulation methodology with the addition of several controlling and evaluation tools to assess the condition of output data. By evaluating the convergence of the data condition indices, the proposed scheme is able to determine the optimal setups for the data generation process and subsequently avoid unnecessarily excessive data. The efficacy of this scheme is demonstrated via applications to data from a benchmark structure in the field.
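As a minimal sketch of the novelty index underlying this approach: the Mahalanobis squared distance of a feature vector x from training data with mean mu and covariance S is (x - mu)^T S^{-1} (x - mu). The Python example below uses synthetic placeholder data, and the chi-squared threshold is one common choice rather than the article's specific procedure:

    import numpy as np
    from scipy.stats import chi2

    rng = np.random.default_rng(0)

    # Synthetic multivariate-normal training features representing the baseline condition.
    X_train = rng.normal(size=(500, 4))
    mu = X_train.mean(axis=0)
    cov_inv = np.linalg.inv(np.cov(X_train, rowvar=False))

    def mahalanobis_sq(x):
        # Mahalanobis squared distance of a feature vector from the training distribution.
        d = x - mu
        return float(d @ cov_inv @ d)

    # Flag a test observation as novel if its distance exceeds a chi-squared threshold.
    threshold = chi2.ppf(0.99, df=X_train.shape[1])
    x_test = np.array([3.0, -2.5, 1.0, 0.5])
    print(mahalanobis_sq(x_test), mahalanobis_sq(x_test) > threshold)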
Abstract:
Increasingly large-scale applications are generating an unprecedented amount of data. However, the increasing gap between computation and I/O capacity on High End Computing (HEC) machines creates a severe bottleneck for data analysis. Instead of moving data from its source to the output storage, in-situ analytics processes output data while simulations are running. However, in-situ data analysis incurs much more computing resource contention with simulations. Such contention severely damages the performance of simulations on HEC machines. Since different data processing strategies have different impacts on performance and cost, there is a consequent need for flexibility in the location of data analytics. In this paper, we explore and analyze several potential data-analytics placement strategies along the I/O path. To find the best strategy for reducing data movement in a given situation, we propose a flexible data analytics (FlexAnalytics) framework. Based on this framework, a FlexAnalytics prototype system is developed for analytics placement. The FlexAnalytics system enhances the scalability and flexibility of the current I/O stack on HEC platforms and is useful for data pre-processing, runtime data analysis and visualization, as well as for large-scale data transfer. Two use cases – scientific data compression and remote visualization – have been applied in the study to verify the performance of FlexAnalytics. Experimental results demonstrate that the FlexAnalytics framework increases data transition bandwidth and improves application end-to-end transfer performance.
Abstract:
We highlight an example of considerable bias in officially published input-output data (factor-income shares) by an LDC (Turkey), which many researchers use without question. We make use of an intertemporal general equilibrium model of trade and production to evaluate the dynamic gains for Turkey from currently debated trade policy options and compare the predictions using conservatively adjusted, rather than official, data on factor shares.
Abstract:
Measures have been developed to understand tendencies in the distribution of economic activity. The merits of these measures lie in the convenience of data collection and processing. In this interim report, investigating the properties of such measures for determining the geographical spread of economic activities, we summarize the merits and limitations of the measures and make clear that caution must be applied in their usage. As a first trial in accessing areal data, this project focuses on administrative areas, not on point data or input-output data. Firm-level data is not within the scope of this article. The rest of this article is organized as follows. In Section 2, we touch on the limitations and problems associated with the measures and areal data. Specific measures are introduced in Section 3 and applied in Section 4. The conclusion summarizes the findings and discusses future work.
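The abstract does not name the specific measures it evaluates; purely as an illustration of the kind of areal concentration measure in question, a Herfindahl-style index over hypothetical regional shares can be computed as follows (an assumption for illustration, not one of the report's own measures):

    # Hypothetical employment shares of an industry across administrative areas.
    shares = [0.40, 0.25, 0.15, 0.10, 0.10]

    # Herfindahl-style concentration index: ranges from 1/n (even spread) to 1 (fully concentrated).
    hhi = sum(s ** 2 for s in shares)
    print(round(hhi, 3))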
Abstract:
A novel algorithm based on bimatrix game theory has been developed to improve the accuracy and reliability of a speaker diarization system. This algorithm fuses the output data of two open-source speaker diarization programs, LIUM and SHoUT, taking advantage of the best properties of each one. The performance of this new system has been tested by means of audio streams from several movies. From preliminary results on fragments of five movies, improvements of 63% in false alarms and missed speech mistakes have been achieved with respect to the LIUM and SHoUT systems working alone. Moreover, we also improve the number of recognized speakers by 20%, getting close to the real number of speakers in the audio stream.
Abstract:
The main advantage of Data Envelopment Analysis (DEA) is that it does not require any a priori weights for inputs and outputs and allows individual DMUs to evaluate their efficiencies with the input and output weights that are most favorable for calculating their efficiency. It can be argued that if DMUs are experiencing similar circumstances, then the pricing of inputs and outputs should apply uniformly across all DMUs. That is, the use of different weights for DMUs means that their efficiencies cannot be compared and that it is not possible to rank them on the same basis. This is a significant drawback of DEA; however, the literature offers many solutions, including the use of a common set of weights (CSW). Besides, the conventional DEA methods require accurate measurement of both the inputs and outputs; however, crisp input and output data may not always be available in real-world applications. This paper develops a new model for the calculation of CSW in fuzzy environments using fuzzy DEA. Further, a numerical example is used to show the validity and efficacy of the proposed model and to compare the results with previous models available in the literature.
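For context, the "most favorable weights" property corresponds to each DMU solving its own optimization problem. The sketch below shows the standard input-oriented CCR envelopment model (the dual of the multiplier form) on hypothetical data, not the fuzzy CSW model proposed in the paper:

    import numpy as np
    from scipy.optimize import linprog

    # Hypothetical data: 4 DMUs, 2 inputs (rows of X) and 1 output (rows of Y).
    X = np.array([[4.0, 7.0, 8.0, 4.0],
                  [3.0, 3.0, 1.0, 2.0]])
    Y = np.array([[1.0, 1.0, 1.0, 1.0]])

    def ccr_efficiency(k):
        # Input-oriented CCR efficiency of DMU k: minimize theta such that a composite
        # DMU uses at most theta times DMU k's inputs while producing at least its outputs.
        m, n = X.shape
        s = Y.shape[0]
        c = np.concatenate(([1.0], np.zeros(n)))        # minimize theta
        A_in = np.hstack((-X[:, [k]], X))               # sum_j lam_j * x_ij <= theta * x_ik
        A_out = np.hstack((np.zeros((s, 1)), -Y))       # sum_j lam_j * y_rj >= y_rk
        A_ub = np.vstack((A_in, A_out))
        b_ub = np.concatenate((np.zeros(m), -Y[:, k]))
        res = linprog(c, A_ub=A_ub, b_ub=b_ub,
                      bounds=[(None, None)] + [(0, None)] * n)
        return res.fun

    print([round(ccr_efficiency(k), 3) for k in range(4)])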
Abstract:
Performance evaluation in conventional data envelopment analysis (DEA) requires crisp numerical values. However, the observed values of the input and output data in real-world problems are often imprecise or vague. These imprecise and vague data can be represented by linguistic terms characterised by fuzzy numbers in DEA to reflect the decision-makers' intuition and subjective judgements. This paper extends the conventional DEA models to a fuzzy framework by proposing a new fuzzy additive DEA model for evaluating the efficiency of a set of decision-making units (DMUs) with fuzzy inputs and outputs. The contribution of this paper is threefold: (1) we consider ambiguous, uncertain and imprecise input and output data in DEA, (2) we propose a new fuzzy additive DEA model derived from the α-level approach and (3) we demonstrate the practical aspects of our model with two numerical examples and show its comparability with five different fuzzy DEA methods in the literature. Copyright © 2011 Inderscience Enterprises Ltd.
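As a minimal sketch of the α-level idea this model builds on (illustrative triangular fuzzy data, not the paper's examples): an α-cut turns each fuzzy number into an interval, so a fuzzy DEA problem can be treated as a family of interval problems across α levels.

    def alpha_cut(a, b, c, alpha):
        # Alpha-cut of a triangular fuzzy number (a, b, c): the interval of values
        # whose membership degree is at least alpha.
        return (a + alpha * (b - a), c - alpha * (c - b))

    # Illustrative fuzzy input "about 10" evaluated at a few alpha levels.
    for alpha in (0.0, 0.5, 1.0):
        low, high = alpha_cut(8.0, 10.0, 12.0, alpha)
        print(alpha, (low, high))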
Abstract:
The increasing intensity of global competition has led organizations to utilize various types of performance measurement tools for improving the quality of their products and services. Data envelopment analysis (DEA) is a methodology for evaluating and measuring the relative efficiencies of a set of decision making units (DMUs) that use multiple inputs to produce multiple outputs. All the data in the conventional DEA with input and/or output ratios assumes the form of crisp numbers. However, the observed values of data in real-world problems are sometimes expressed as interval ratios. In this paper, we propose two new models: general and multiplicative non-parametric ratio models for DEA problems with interval data. The contributions of this paper are fourfold: (1) we consider input and output data expressed as interval ratios in DEA; (2) we address the gap in DEA literature for problems not suitable or difficult to model with crisp values; (3) we propose two new DEA models for evaluating the relative efficiencies of DMUs with interval ratios, and (4) we present a case study involving 20 banks with three interval ratios to demonstrate the applicability and efficacy of the proposed models where the traditional indicators are mostly financial ratios. © 2011 Elsevier Inc.