29 results for Multi-dimensional database
Abstract:
In 'Avalanche', an object is lowered, players staying in contact throughout. Normally the task is easily accomplished. However, with larger groups counter-intuitive behaviours appear. The paper proposes a formal theory for the underlying causal mechanisms. The aim is not only to provide an explicit, testable hypothesis for the source of the observed modes of behaviour, but also to exemplify the contribution that formal theory building can make to understanding complex social phenomena. Mapping reveals the importance of geometry to the Avalanche game; each player has a pair of balancing loops, one involved in lowering the object, the other ensuring contact. With more players, sets of balancing loops interact and can allow dominance by reinforcing loops, causing the system to chase upwards towards an ever-increasing goal. However, a series of other effects concerning human physiology and behaviour (HPB) is posited as playing a role. The hypothesis is therefore rigorously tested using simulation. For simplicity a 'One Degree of Freedom' case is examined, allowing all of the effects to be included whilst rendering the analysis more transparent. Formulation and experimentation with the model gives insight into the behaviours. Multi-dimensional rate/level analysis indicates that there is only a narrow region in which the system is able to move downwards. Model runs reproduce the single 'desired' mode of behaviour and all three of the observed 'problematic' ones. Sensitivity analysis gives further insight into the system's modes and their causes. The behaviour is seen to arise only when the geometric effects apply (number of players greater than degrees of freedom of the object) in combination with a range of HPB effects. An analogy exists between the co-operative behaviour required here and various examples: conflicting strategic objectives in organizations, the Prisoners' Dilemma and integrated bargaining situations. Additionally, the game may be relatable in more direct algebraic terms to situations involving companies in which the resulting behaviours are mediated by market regulations. Finally, comment is offered on the inadequacy of some forms of theory building, and the case is made for formal theory building involving the use of models, analysis and plausible explanations to create deep understanding of social phenomena.
Abstract:
A favoured method of assimilating information from state-of-the-art climate models into integrated assessment models of climate impacts is to use the transient climate response (TCR) of the climate models as an input, sometimes accompanied by a pattern matching approach to provide spatial information. More recent approaches to the problem use TCR with another independent piece of climate model output: the land-sea surface warming ratio (φ). In this paper we show why the use of φ in addition to TCR has such utility. Multiple linear regressions of surface temperature change onto TCR and φ in 22 climate models from the CMIP3 multi-model database show that the inclusion of φ explains a much greater fraction of the inter-model variance than using TCR alone. The improvement is particularly pronounced in North America and Eurasia in the boreal summer season, and in the Amazon all year round. The use of φ as the second metric is beneficial for three reasons: firstly, it is uncorrelated with TCR in state-of-the-art climate models and can therefore be considered an independent metric; secondly, because of its projected time-invariance, the magnitude of φ is better constrained than TCR in the immediate future; thirdly, the use of two variables is much simpler than approaches such as pattern scaling from climate models. Finally, we show how using the latest estimates of φ from climate models, with a mean value of 1.6 as opposed to previously reported values of 1.4, can significantly increase the mean time-integrated discounted damage projections in a state-of-the-art integrated assessment model by about 15%. When compared to damages calculated without the inclusion of the land-sea warming ratio, this figure rises to 65%, equivalent to almost 200 trillion dollars over 200 years.
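For illustration, a minimal sketch (not the authors' code) of the kind of multiple linear regression described in this abstract: regional warming across an ensemble of models regressed onto TCR and the land-sea warming ratio φ. All numbers below are synthetic; only the regression structure follows the abstract.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic inter-model spread: one row per climate model (22, as in CMIP3).
n_models = 22
tcr = rng.normal(1.8, 0.4, n_models)   # transient climate response (K), illustrative
phi = rng.normal(1.6, 0.1, n_models)   # land-sea warming ratio (dimensionless), illustrative

# Synthetic regional surface warming (K) for one grid cell / region.
regional_dT = 0.9 * tcr + 1.2 * (phi - 1.0) + rng.normal(0, 0.1, n_models)

# Multiple linear regression: dT ~ a + b*TCR + c*phi
X = np.column_stack([np.ones(n_models), tcr, phi])
coef, _, _, _ = np.linalg.lstsq(X, regional_dT, rcond=None)

# Fraction of inter-model variance explained (R^2).
fitted = X @ coef
r2 = 1.0 - np.sum((regional_dT - fitted) ** 2) / np.sum((regional_dT - regional_dT.mean()) ** 2)
print(f"intercept, TCR and phi coefficients: {coef}")
print(f"R^2 = {r2:.3f}")
```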
Abstract:
Global communication requirements and load imbalance of some parallel data mining algorithms are the major obstacles to exploiting the computational power of large-scale systems. This work investigates how non-uniform data distributions can be exploited to remove the global communication requirement and to reduce the communication cost in iterative parallel data mining algorithms. In particular, the analysis focuses on one of the most influential and popular data mining methods, the k-means algorithm for cluster analysis. The straightforward parallel formulation of the k-means algorithm requires a global reduction operation at each iteration step, which hinders its scalability. This work studies a different parallel formulation of the algorithm in which the requirement of global communication can be relaxed while still providing the exact solution of the centralised k-means algorithm. The proposed approach exploits a non-uniform data distribution which can either be found in real-world distributed applications or be induced by means of multi-dimensional binary search trees. The approach can also be extended to accommodate an approximation error, which allows a further reduction of the communication costs.
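A minimal sketch of the straightforward data-parallel k-means formulation the abstract refers to, in which each partition computes partial sums and counts and a global reduction combines them at every iteration (the partitions here are just array slices standing in for distributed nodes; all names and data are illustrative, and the paper's relaxed-communication scheme is not shown).

```python
import numpy as np

def parallel_kmeans(partitions, k, n_iter=20, seed=0):
    """Data-parallel k-means: each partition contributes partial sums/counts,
    combined by a global reduction at every iteration."""
    rng = np.random.default_rng(seed)
    all_points = np.vstack(partitions)
    centroids = all_points[rng.choice(len(all_points), k, replace=False)]

    for _ in range(n_iter):
        # Local step on each partition: assign points, accumulate partial statistics.
        partial_sums = np.zeros((len(partitions), k, all_points.shape[1]))
        partial_counts = np.zeros((len(partitions), k))
        for p, X in enumerate(partitions):
            labels = np.argmin(((X[:, None, :] - centroids[None]) ** 2).sum(-1), axis=1)
            for j in range(k):
                partial_sums[p, j] = X[labels == j].sum(axis=0)
                partial_counts[p, j] = (labels == j).sum()

        # Global reduction step (the per-iteration communication the paper seeks to relax).
        total_sums = partial_sums.sum(axis=0)
        total_counts = partial_counts.sum(axis=0)
        nonempty = total_counts > 0
        centroids[nonempty] = total_sums[nonempty] / total_counts[nonempty, None]
    return centroids

# Example: data split across 4 partitions, simulating distributed nodes.
rng = np.random.default_rng(1)
data = rng.normal(size=(4, 250, 2)) + np.arange(4)[:, None, None]
print(parallel_kmeans(list(data), k=4))
```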
Abstract:
Empowerment is a standard but ambiguous element of development rhetoric, and so, through the socially complex and contested terrain of South Africa, this paper explores its potential to contribute to inclusive development. Investigating micro-level engagements with the national strategy of Broad-Based Black Economic Empowerment (B-BBEE) in the South African wine industry highlights the limitations, but also the potential, of this single-domain approach. However, latent paternalism, entrenched interests and a ‘dislocated blackness’ maintain a complex racial politics that shapes both power relations and the opportunities for transformation within the industry. Nonetheless, while B-BBEE may not, in reality, be broad-based, its manifestations are contributing to challenging racist structures and normalising changing attitudes. This paper concludes that, to be transformative, empowerment needs to be re-embedded within South Africa as a multi-scalar, multi-dimensional dialogue and, despite the continuation of structural constraints, positions the local as a critical scale at which to initiate broader social change.
Abstract:
Classical regression methods take vectors as covariates and estimate the corresponding vectors of regression parameters. When addressing regression problems on covariates of more complex form such as multi-dimensional arrays (i.e. tensors), traditional computational models can be severely compromised by ultrahigh dimensionality as well as complex structure. By exploiting the special structure of tensor covariates, the tensor regression model provides a promising solution to reduce the model's dimensionality to a manageable level, thus leading to efficient estimation. Most of the existing tensor-based methods independently estimate each individual regression problem based on a tensor decomposition which allows simultaneous projections of an input tensor onto more than one direction along each mode. In practice, multi-dimensional data are often collected under the same or very similar conditions, so that the data share some common latent components while each regression task can also have its own independent parameters. Therefore, it is beneficial to analyse the regression parameters across all the regressions in a linked way. In this paper, we propose a tensor regression model based on Tucker Decomposition, which identifies not only the common components of parameters across all the regression tasks, but also the independent factors contributing to each particular regression task. Under this paradigm, the number of independent parameters along each mode is constrained by a sparsity-preserving regulariser. Linked multiway parameter analysis and sparsity modelling further reduce the total number of parameters, with lower memory cost than their tensor-based counterparts. The effectiveness of the new method is demonstrated on real data sets.
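A minimal sketch of the forward model in a Tucker-based tensor regression: the coefficient tensor W is built from a shared core and one factor matrix per mode, and the response is the inner product of W with each tensor covariate. The dimensions, ranks and data below are illustrative only; the linked multi-task estimation and the sparsity-preserving regulariser described in the abstract are not shown.

```python
import numpy as np

rng = np.random.default_rng(0)

# Tensor covariates: N samples of a 3-way array (e.g. 8 x 9 x 10), illustrative sizes.
N, dims, ranks = 100, (8, 9, 10), (2, 3, 2)
X = rng.normal(size=(N, *dims))

# Tucker-structured coefficient tensor: core G and one factor matrix per mode.
G = rng.normal(size=ranks)
U = [rng.normal(size=(d, r)) for d, r in zip(dims, ranks)]

# Reconstruct W = G x_1 U1 x_2 U2 x_3 U3 via mode products.
W = np.einsum('abc,ia,jb,kc->ijk', G, U[0], U[1], U[2])

# Tensor regression forward model: y_n = <W, X_n> + noise.
y = np.einsum('nijk,ijk->n', X, W) + 0.1 * rng.normal(size=N)

print(W.shape, y.shape)   # (8, 9, 10) (100,)
```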
Abstract:
The purpose of this paper is to investigate several analytical methods of solving the first passage (FP) problem for the Rouse model, the simplest model of a polymer chain. We show that this problem has to be treated as a multi-dimensional Kramers' problem, which presents rich and unexpected behavior. We first perform direct and forward-flux sampling (FFS) simulations, and measure the mean first-passage time $\tau(z)$ for the free end to reach a certain distance $z$ away from the origin. The results show that the mean FP time becomes shorter as the Rouse chain is represented by more beads. Two scaling regimes of $\tau(z)$ are observed, with the transition between them varying as a function of chain length. We use these simulation results to test two theoretical approaches. One is a well-known asymptotic theory valid in the limit of zero temperature. We show that this limit corresponds to a fully extended chain in which each chain segment is stretched, which is not particularly realistic. A new theory based on the well-known Freidlin-Wentzell theory is proposed, in which the dynamics is projected onto the minimal action path. The new theory predicts both scaling regimes correctly, but fails to get the correct numerical prefactor in the first regime. Combining our theory with the FFS simulations leads us to a simple analytical expression valid for all extensions and chain lengths. One of the applications of the polymer FP problem occurs in the context of branched polymer rheology. In this paper, we consider the arm-retraction mechanism in the tube model, which maps exactly onto the model we have solved. The results are compared to the Milner-McLeish theory without constraint release, which is found to overestimate the FP time by a factor of 10 or more.
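A minimal sketch of the kind of direct simulation mentioned in this abstract: overdamped Langevin dynamics of a Rouse chain with one end tethered, recording the time at which the free end first reaches a distance z from the origin. Units, parameters and the time step are illustrative and have not been tuned to reproduce the paper's results; the FFS enhancement is not shown.

```python
import numpy as np

def rouse_first_passage(n_beads=10, z=3.0, dt=1e-3, kT=1.0, k_spring=3.0,
                        friction=1.0, max_steps=2_000_000, seed=0):
    """Return the first time the free end bead is at distance >= z from the
    tether point (1D for simplicity)."""
    rng = np.random.default_rng(seed)
    x = np.zeros(n_beads)                    # bead positions, bead 0 tethered at 0
    noise_amp = np.sqrt(2.0 * kT * dt / friction)
    for step in range(1, max_steps + 1):
        # Harmonic spring forces between neighbouring beads (Rouse model).
        f = np.zeros(n_beads)
        dx = np.diff(x)                      # bond extensions
        f[:-1] += k_spring * dx              # pull towards the next bead
        f[1:]  -= k_spring * dx              # pull towards the previous bead
        # Overdamped update with thermal noise; bead 0 stays tethered.
        x[1:] += (f[1:] / friction) * dt + noise_amp * rng.normal(size=n_beads - 1)
        if abs(x[-1]) >= z:
            return step * dt                 # first-passage time
    return np.nan                            # did not reach z within max_steps

# Crude estimate of the mean first-passage time tau(z) from a few runs.
taus = [rouse_first_passage(seed=s) for s in range(5)]
print(np.nanmean(taus))
```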
Abstract:
This study investigates flash flood forecast and warning communication, interpretation, and decision making, using data from a survey of 418 members of the public in Boulder, Colorado, USA. Respondents to the public survey varied in their perceptions and understandings of flash flood risks in Boulder, and some had misconceptions about flash flood risks, such as the safety of crossing fast-flowing water. About 6% of respondents indicated consistent reversals of US watch-warning alert terminology. However, more in-depth analysis illustrates the multi-dimensional, situationally dependent meanings of flash flood alerts, as well as the importance of evaluating interpretation and use of warning information along with alert terminology. Some public respondents estimated low likelihoods of flash flooding given a flash flood warning; these were associated with lower anticipated likelihood of taking protective action given a warning. Protective action intentions were also lower among respondents who had less trust in flash flood warnings, those who had not made prior preparations for flash flooding, and those who believed themselves to be safer from flash flooding. Additional analysis, using open-ended survey questions about responses to warnings, elucidates the complex, contextual nature of protective decision making during flash flood threats. These findings suggest that warnings can play an important role not only by notifying people that there is a threat and helping motivate people to take protective action, but also by helping people evaluate what actions to take given their situation.
Abstract:
We describe the use of bivariate 3D empirical orthogonal functions (EOFs) in characterising low-frequency variability of the Atlantic thermohaline circulation (THC) in the Hadley Centre global climate model, HadCM3. We find that the leading two modes are well correlated with an index of the meridional overturning circulation (MOC) on decadal timescales, with the leading mode alone accounting for 54% of the decadal variance. Episodes of coherent oscillations in the sub-space of the leading EOFs are identified; these episodes are of great interest for the predictability of the THC, and could indicate the existence of different regimes of natural variability. The mechanism identified for the multi-decadal variability is an internal ocean mode, dominated by changes in convection in the Nordic Seas, which lead the changes in the MOC by a few years. Variations in salinity transports from the Arctic and from the North Atlantic are the main feedbacks which control the oscillation. This mode has a weak feedback onto the atmosphere and hence a surface climatic influence. Interestingly, some of these climate impacts lead the changes in the overturning. There are also similarities to observed multi-decadal climate variability.
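A minimal, generic sketch of how EOFs of a space-time anomaly field can be obtained via singular value decomposition. The data below are synthetic, and the handling of the bivariate (e.g. temperature and salinity) 3D fields, area/volume weighting and any filtering used in the study are not reproduced.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic anomaly field: 200 annual-mean time steps x 500 spatial points
# (a real application would stack the 3D temperature and salinity fields
# into one long state vector per time step).
n_time, n_space = 200, 500
field = rng.normal(size=(n_time, n_space))

# Remove the time mean at each point, then take the SVD.
anom = field - field.mean(axis=0)
U, s, Vt = np.linalg.svd(anom, full_matrices=False)

eofs = Vt                       # rows: spatial patterns (EOFs)
pcs = U * s                     # columns: principal component time series
var_frac = s**2 / np.sum(s**2)  # fraction of variance explained by each mode

print(f"Leading mode explains {100 * var_frac[0]:.1f}% of the variance")
```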
Abstract:
Approximations to the scattering of linear surface gravity waves on water of varying quiescent depth are investigated by means of a variational approach. Previous authors have used wave modes associated with the constant-depth case to approximate the velocity potential, leading to a system of coupled differential equations. Here it is shown that a transformation of the dependent variables results in a much simplified differential equation system, which in turn leads to a new multi-mode 'mild-slope' approximation. Further, the effect of adding a bed mode is examined and clarified. A systematic analytic method is presented for evaluating the inner products that arise, and numerical experiments for two-dimensional scattering are used to examine the performance of the new approximations.
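For context, the classical single-mode mild-slope equation that such multi-mode approximations generalise can be written as follows; this is standard background, not the new approximation derived in the paper.

```latex
% Classical (single-mode) mild-slope equation for the reduced velocity
% potential \phi(x, y), with local phase speed c, group velocity c_g and
% wavenumber k(h) given by the dispersion relation \omega^2 = g k \tanh(k h):
\nabla \cdot \left( c\, c_g\, \nabla \phi \right) + k^2 c\, c_g\, \phi = 0
```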
Abstract:
If the fundamental precepts of Farming Systems Research were taken literally, then 'unique' solutions would have to be sought for each farm. This is an unrealistic expectation, but it has led to the idea of a recommendation domain, implying the creation of a taxonomy of farms in order to increase the general applicability of recommendations. Mathematical programming models are an established means of generating recommended solutions, but for such models to be effective they have to be constructed for 'truly' typical or representative situations. Multi-variate statistical techniques provide a means of creating the required typologies, particularly when an exhaustive database is available. This paper illustrates the application of this methodology in two different studies that shared the common purpose of identifying types of farming systems in their respective study areas. The issues related to the use of factor and cluster analyses for farm typification prior to building representative mathematical programming models for Chile and Pakistan are highlighted.
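A minimal sketch of the typification step described in this abstract: reduce a farm-survey database to a few latent factors and then cluster farms into types. The data, variable counts and the numbers of factors and clusters are illustrative; the representative mathematical programming models built afterwards are not shown.

```python
import numpy as np
from sklearn.preprocessing import StandardScaler
from sklearn.decomposition import FactorAnalysis
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)

# Synthetic farm survey: 300 farms x 8 variables (e.g. farm size, herd size,
# off-farm income, irrigated share, ...), standardised before analysis.
farms = rng.normal(size=(300, 8))
X = StandardScaler().fit_transform(farms)

# Factor analysis to summarise correlated survey variables.
scores = FactorAnalysis(n_components=3, random_state=0).fit_transform(X)

# Cluster analysis on the factor scores to define farm types (recommendation
# domains); each type would then get its own representative programming model.
types = KMeans(n_clusters=4, n_init=10, random_state=0).fit_predict(scores)

print(np.bincount(types))   # number of farms assigned to each type
```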
Abstract:
Robotic and manual methods have been used to identify significantly changing proteins regulated when Schizosaccharomyces pombe is exposed to oxidative stress. Differently treated S. pombe cells were lysed, labelled with CyDye and analysed by two-dimensional difference gel electrophoresis. Gel images were analysed off-line using the DeCyder image analysis software (GE Healthcare, Amersham, UK), allowing selection of significantly regulated proteins. Proteins displaying differential expression were excised robotically for manual digestion and identified by matrix-assisted laser desorption/ionisation mass spectrometry (MALDI-MS). Additionally, the same set of proteins displaying differential expression was automatically cut and digested using a prototype robotic platform. Automated MALDI-MS, peak label assignment and database searching were utilised to identify as many proteins as possible. The results achieved by the robotic system were compared to manual methods. The identification of all significantly altered proteins provides an annotated peroxide stress-related proteome that can be used as a base resource against which other stress-induced proteomic changes can be compared.
Abstract:
A continuous tropospheric and stratospheric vertically resolved ozone time series, from 1850 to 2099, has been generated to be used as forcing in global climate models that do not include interactive chemistry. A multiple linear regression analysis of SAGE I+II satellite observations and polar ozonesonde measurements is used for the stratospheric zonal mean dataset during the well-observed period from 1979 to 2009. In addition to terms describing the mean annual cycle, the regression includes terms representing equivalent effective stratospheric chlorine (EESC) and the 11-yr solar cycle variability. The EESC regression fit coefficients, together with pre-1979 EESC values, are used to extrapolate the stratospheric ozone time series backward to 1850. While a similar procedure could be used to extrapolate into the future, coupled chemistry climate model (CCM) simulations indicate that future stratospheric ozone abundances are likely to be significantly affected by climate change, and capturing such effects through a regression model approach is not feasible. Therefore, the stratospheric ozone dataset is extended into the future (merged in 2009) with multi-model mean projections from 13 CCMs that performed a simulation until 2099 under the SRES (Special Report on Emission Scenarios) A1B greenhouse gas scenario and the A1 adjusted halogen scenario in the second round of the Chemistry-Climate Model Validation (CCMVal-2) Activity. The stratospheric zonal mean ozone time series is merged with a three-dimensional tropospheric dataset extracted from simulations of the past by two CCMs (CAM3.5 and GISSPUCCINI) and of the future by one CCM (CAM3.5). The future tropospheric ozone time series continues the historical CAM3.5 simulation until 2099 following the four different Representative Concentration Pathways (RCPs). Generally good agreement is found between the historical segment of the ozone database and satellite observations, although it should be noted that total column ozone is overestimated in the southern polar latitudes during spring and tropospheric column ozone is slightly underestimated. Vertical profiles of tropospheric ozone are broadly consistent with ozonesondes and in-situ measurements, with some deviations in regions of biomass burning. The tropospheric ozone radiative forcing (RF) from the 1850s to the 2000s is 0.23 W m⁻², lower than previous results. The lower value is mainly due to (i) a smaller increase in biomass burning emissions; (ii) a larger influence of stratospheric ozone depletion on upper tropospheric ozone at high southern latitudes; and possibly (iii) a larger influence of clouds (which act to reduce the net forcing) compared to previous radiative forcing calculations. Over the same period, decreases in stratospheric ozone, mainly at high latitudes, produce an RF of −0.08 W m⁻², which is more negative than the central Intergovernmental Panel on Climate Change (IPCC) Fourth Assessment Report (AR4) value of −0.05 W m⁻², but which is within the stated range of −0.15 to +0.05 W m⁻². The more negative value is explained by the fact that the regression model simulates significant ozone depletion prior to 1979, in line with the increase in EESC and as confirmed by CCMs, while the AR4 assumed no change in stratospheric RF prior to 1979. A negative RF of similar magnitude persists into the future, although its location shifts from high latitudes to the tropics.
This shift is due to increases in polar stratospheric ozone combined with decreases in tropical lower stratospheric ozone, related to a strengthening of the Brewer-Dobson circulation, particularly through the latter half of the 21st century. Differences in trends in tropospheric ozone among the four RCPs are mainly driven by different methane concentrations, resulting in a range of tropospheric ozone RFs between 0.1 and 0.4 W m⁻² by 2100. The ozone dataset described here has been released for the Coupled Model Intercomparison Project phase 5 (CMIP5) model simulations in netCDF Climate and Forecast (CF) Metadata Convention format at the PCMDI website (http://cmip-pcmdi.llnl.gov/).
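A schematic form of the stratospheric regression described in this abstract, with annual-cycle harmonics plus EESC and solar-cycle terms, fitted independently at each latitude and pressure level. The number of harmonics K and the use of an F10.7-type solar proxy are illustrative assumptions, not the dataset's exact specification.

```latex
% Zonal-mean ozone at a given latitude and pressure level, regressed on time t;
% S(t) is an 11-yr solar-cycle proxy (e.g. F10.7 flux, assumed here):
O_3(t) \;=\; a_0
  \;+\; \sum_{k=1}^{K} \left[ a_k \cos\!\left(\frac{2\pi k t}{1\,\mathrm{yr}}\right)
                            + b_k \sin\!\left(\frac{2\pi k t}{1\,\mathrm{yr}}\right) \right]
  \;+\; c \,\mathrm{EESC}(t)
  \;+\; d \, S(t)
  \;+\; \varepsilon(t)
```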
Abstract:
Upscaling ecological information to larger scales in space and downscaling remote sensing observations or model simulations to finer scales remain grand challenges in Earth system science. Downscaling often involves inferring subgrid information from coarse-scale data, and such ill-posed problems are classically addressed using regularization. Here, we apply two-dimensional Tikhonov Regularization (2DTR) to simulate subgrid surface patterns for ecological applications. Specifically, we test the ability of 2DTR to simulate the spatial statistics of high-resolution (4 m) remote sensing observations of the normalized difference vegetation index (NDVI) in a tundra landscape. We find that the 2DTR approach as applied here can capture the major mode of spatial variability of the high-resolution information, but not multiple modes of spatial variability, and that the Lagrange multiplier (γ) used to impose the condition of smoothness across space is related to the range of the experimental semivariogram. We used observed and 2DTR-simulated maps of NDVI to estimate landscape-level leaf area index (LAI) and gross primary productivity (GPP). NDVI maps simulated using a γ value that approximates the range of observed NDVI result in a landscape-level GPP estimate that differs by ca. 2% from that obtained using observed NDVI. Following findings that GPP per unit LAI is lower near vegetation patch edges, we simulated vegetation patch edges using multiple approaches and found that simulated GPP declined by up to 12% as a result. 2DTR can generate random landscapes rapidly and can be applied to disaggregate ecological information and to compare spatial observations against simulated landscapes.
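A minimal sketch of the 2D Tikhonov regularization idea: recover a fine-grid field from coarse block averages by solving a least-squares problem with a smoothness (Laplacian) penalty weighted by γ. Grid sizes, the periodic Laplacian penalty and the value of γ are illustrative assumptions, not the paper's exact formulation.

```python
import numpy as np

def downscale_2dtr(coarse, factor, gamma):
    """Infer an (nc*factor x nc*factor) field whose block averages match
    `coarse`, regularised by a 2D Laplacian smoothness penalty."""
    nc = coarse.shape[0]
    nf = nc * factor

    # A: block-averaging operator mapping the fine grid to the coarse grid.
    A = np.zeros((nc * nc, nf * nf))
    for i in range(nc):
        for j in range(nc):
            block = np.zeros((nf, nf))
            block[i*factor:(i+1)*factor, j*factor:(j+1)*factor] = 1.0 / factor**2
            A[i * nc + j] = block.ravel()

    # L: discrete 2D Laplacian on the fine grid (smoothness operator, periodic).
    L = np.zeros((nf * nf, nf * nf))
    idx = lambda i, j: i * nf + j
    for i in range(nf):
        for j in range(nf):
            L[idx(i, j), idx(i, j)] = -4.0
            for di, dj in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                L[idx(i, j), idx((i + di) % nf, (j + dj) % nf)] = 1.0

    # Tikhonov solution: minimise ||A x - b||^2 + gamma * ||L x||^2.
    b = coarse.ravel()
    x = np.linalg.solve(A.T @ A + gamma * L.T @ L, A.T @ b)
    return x.reshape(nf, nf)

# Example: downscale a 6x6 coarse NDVI-like field to 24x24.
rng = np.random.default_rng(0)
coarse = 0.5 + 0.2 * rng.random((6, 6))
fine = downscale_2dtr(coarse, factor=4, gamma=0.1)
print(fine.shape, fine.mean(), coarse.mean())
```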