675 results for "real world mathematics"
Abstract:
This special volume offers a collection of papers that examine challenges and solutions where water meets complex intersections with women, waste, wisdom or wealth. This unique array of articles offers readers of the Journal of Cleaner Production multidisciplinary views of water issues involving physical and structural perspectives, as well as political, social, cultural and increasingly serious environmental challenges. By building upon extensive literature reviews along with data collected through empirical study and real-world observations, the authors effectively present valuable insights into the depth and nature of many of the problems, but also present a well-developed array of recommendations based upon successful projects and programs worldwide. Among the recommendations are proposals for policies, approaches and regulations that provide system enhancements to prevent pollution and contamination, and ideas to monitor and regulate water consumption. This international collection includes studies from 15 countries, documented and written by an equal number of female and male authors.
Abstract:
From emails relating to adoption over the Internet to discussions in the airline cockpit, the spoken or written texts we produce can have significant social consequences. The area of Mediated Discourse Analysis considers texts in their social and cultural contexts to explore the actions individuals take with texts - and the consequences of those actions. Discourse in Action: brings together leading scholars from around the world in the area of Mediated Discourse Analysis; reveals ways in which its theory and methodology can be used in research into contemporary social situations; explores real situations and draws on real data in each chapter; and shows how analysis of texts in their social contexts broadens our understanding of the real world. Taken together, the chapters provide a comprehensive overview of the field and present a range of current studies that address some of the most important questions facing students and researchers in linguistics, education, communication studies and other fields.
Abstract:
Background: Health care literature supports the development of accessible interventions that integrate behavioral economics, wearable devices, principles of evidence-based behavior change, and community support. However, there are limited real-world examples of large-scale, population-based, member-driven reward platforms. Subsequently, a paucity of outcome data exists and health economic effects remain largely theoretical. To complicate matters, an emerging area of research is defining the role of Superusers, the small percentage of unusually engaged digital health participants who may influence other members. Objective: The objective of this preliminary study is to analyze descriptive data from GOODcoins, a self-guided, free-to-consumer engagement and rewards platform incentivizing walking, running and cycling. Registered members accessed the GOODcoins platform through PCs, tablets or mobile devices, and had the opportunity to sync wearables to track activity. Following registration, members were encouraged to join gamified group challenges and compare their progress with that of others. As members met challenge targets, they were rewarded with GOODcoins, which could be redeemed for planet- or people-friendly products. Methods: Outcome data were obtained from the GOODcoins custom SQL database. The reporting period was December 1, 2014 to May 1, 2015. Descriptive self-report data were analyzed using MySQL and MS Excel. Results: The study period includes data from 1298 users who were connected to an exercise tracking device. Females constituted 52.6% (n=683) of the study population, 33.7% (n=438) were aged 20-29, and 24.8% (n=322) were aged 30-39. Of connected and active members, 77.5% (n=1006) met the recommended daily physical activity guideline of 30 minutes, with a total daily average activity of 107 minutes (95% CI 90, 124). Of all connected and active users, 96.1% (n=1248) listed walking as their primary activity. For members who exchanged GOODcoins, the mean balance was 4,000 (95% CI 3850, 4150) at the time of redemption, and 50.4% (n=61) of exchanges were for fitness or outdoor products, while 4.1% (n=5) were for food-related items. Participants were most likely to complete challenges when rewards were between 201 and 300 GOODcoins. Conclusions: The purpose of this study is to form a baseline for future research. Overall, results indicate that challenges and incentives may be effective for connected and active members, and may play a role in achieving recommended daily activity guidelines. Registrants were typically younger, walking was the primary activity, and rewards were mainly exchanged for fitness or outdoor products. It remains to be determined whether members were already physically active at the time of registration and are representative of healthy adherers, or were previously inactive and were incentivized to change their behavior. As challenges are gamified, there is an opportunity to investigate the role of Superusers and healthy adherers, impacts on behavioral norms, and how cooperative games and incentives can be leveraged across stratified populations. Study limitations and future research agendas are discussed.
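As a minimal illustration of the descriptive analysis reported in this abstract (not the authors' actual pipeline), the sketch below computes the share of members meeting the 30-minute guideline and a normal-approximation 95% CI for mean daily activity; the file name and the columns 'member_id' and 'daily_minutes' are hypothetical.

```python
# Minimal sketch, not the authors' pipeline: hypothetical columns
# 'member_id' and 'daily_minutes' in a CSV export of activity data.
import pandas as pd
import numpy as np

df = pd.read_csv("activity_export.csv")  # hypothetical file

# Share of members whose average daily activity meets the 30-minute guideline
per_member = df.groupby("member_id")["daily_minutes"].mean()
share_meeting = (per_member >= 30).mean()

# Mean daily activity with a normal-approximation 95% confidence interval
mean = per_member.mean()
sem = per_member.std(ddof=1) / np.sqrt(len(per_member))
ci_low, ci_high = mean - 1.96 * sem, mean + 1.96 * sem

print(f"{share_meeting:.1%} meet the guideline; mean {mean:.0f} min "
      f"(95% CI {ci_low:.0f}, {ci_high:.0f})")
```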
Abstract:
Alfred Chandler, the celebrated business historian, provided detailed descriptions of the reasons for failed human commitments and the managerial tools needed to prevent/remediate such failings in the context of large business firms. Chandler's historical narrative identifies three distinct “faces” of bounded reliability—opportunism, benevolent preference reversal, and identity-based discordance—as the main drivers of commitment failure. Adopting bounded reliability (BRel) as a micro-foundation in management studies will raise the quality and relevance of scholarly recommendations to improve managerial decision making and action, because analysis of BRel challenges closely mirrors the real-world problems facing practicing managers.
Abstract:
Exploiting the observed robust relationships between temperature and optical depth in extratropical clouds, we calculate the shortwave cloud feedback from historical data, by regressing observed and modeled cloud property histograms onto local temperature in middle to high southern latitudes. In this region, all CMIP5 models and observational data sets predict a negative cloud feedback, mainly driven by optical thickening. Between 45° and 60°S, the mean observed shortwave feedback (−0.91 ± 0.82 W m⁻² K⁻¹, relative to local rather than global mean warming) is very close to the multimodel mean feedback in RCP8.5 (−0.98 W m⁻² K⁻¹), despite differences in the meridional structure. In models, historical temperature-cloud property relationships reliably predict the forced RCP8.5 response. Because simple theory predicts this optical thickening with warming, and cloud amount changes are relatively small, we conclude that the shortwave cloud feedback is very likely negative in the real world at middle to high latitudes.
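A minimal sketch of the regression idea behind this estimate (not the authors' histogram-based analysis): regress local shortwave cloud radiative effect anomalies onto local temperature anomalies to obtain a feedback in W m⁻² K⁻¹. The anomaly series below are synthetic stand-ins.

```python
# Minimal sketch of a local-regression cloud feedback estimate.
# 'sw_cre_anom' and 'temp_anom' stand in for deseasonalized anomaly
# time series (W m^-2 and K) averaged over 45-60 degrees S.
import numpy as np

rng = np.random.default_rng(0)
temp_anom = rng.normal(0, 0.5, 240)                      # K
sw_cre_anom = -0.9 * temp_anom + rng.normal(0, 2, 240)   # W m^-2

# Least-squares slope = shortwave feedback per degree of *local* warming
slope, intercept = np.polyfit(temp_anom, sw_cre_anom, 1)
print(f"Estimated shortwave cloud feedback: {slope:.2f} W m^-2 K^-1")
```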
Abstract:
Observations and climate models suggest significant decadal variability within the North Atlantic subpolar gyre (NA SPG), though observations are sparse and models disagree on the details of this variability. Therefore, it is important to understand 1) the mechanisms of simulated decadal variability, 2) which parts of simulated variability are more faithful representations of reality, and 3) the implications for climate predictions. Here, we investigate the decadal variability in the NA SPG in the state-of-the-art, high-resolution (0.25° ocean resolution) climate model HadGEM3. We find a decadal mode with a period of 17 years that explains 30% of the annual variance in related indices. The mode arises due to the advection of heat content anomalies and shows asymmetries in the timescale of phase reversal between positive and negative phases. A negative feedback from temperature-driven density anomalies in the Labrador Sea (LS) allows for the phase reversal. The North Atlantic Oscillation (NAO), which exhibits the same periodicity, amplifies the mode. The atmosphere-ocean coupling is stronger during positive than during negative NAO states, explaining the asymmetry. Within the NA SPG, there is potential predictability arising partly from this mode for up to 5 years. There are important similarities between observed and simulated variability, such as the apparent role for the propagation of heat content anomalies. However, observations suggest interannual LS density anomalies are salinity-driven. Salinity control of density would change the temperature feedback to the south, possibly limiting real-world predictive skill in the southern NA SPG with this model. Finally, to understand the diversity of behaviours, we analyse 42 present-generation climate models. Temperature and salinity biases are found to systematically influence the driver of density variability in the LS. Resolution is a good predictor of the biases. The dependence of variability on the background state has important implications for decadal predictions.
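As an illustrative aside (not the study's analysis), a decadal mode like the 17-year period quoted above can be located with a simple periodogram of an annual-mean index; the index below is a synthetic stand-in.

```python
# Minimal sketch: locate a decadal spectral peak in an annual-mean
# subpolar-gyre index (synthetic stand-in for model heat content).
import numpy as np
from scipy.signal import periodogram

rng = np.random.default_rng(1)
years = np.arange(300)
index = np.sin(2 * np.pi * years / 17) + rng.normal(0, 0.7, years.size)

freqs, power = periodogram(index, fs=1.0)     # cycles per year
peak = freqs[np.argmax(power[1:]) + 1]        # skip the zero frequency
print(f"Dominant period: {1 / peak:.1f} years")
```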
Abstract:
This thesis examines three different, but related problems in the broad area of portfolio management for long-term institutional investors, and focuses mainly on the case of pension funds. The first idea (Chapter 3) is the application of a novel numerical technique – robust optimization – to a real-world pension scheme (the Universities Superannuation Scheme, USS) for the first time. The corresponding empirical results are supported by many robustness checks and several benchmarks such as the Bayes-Stein and Black-Litterman models that are also applied for the first time in a pension ALM framework, the Sharpe and Tint model and the actual USS asset allocations. The second idea presented in Chapter 4 is the investigation of whether the selection of the portfolio construction strategy matters in the SRI industry, an issue of great importance for long term investors. This study applies a variety of optimal and naïve portfolio diversification techniques to the same SRI-screened universe, and gives some answers to the question of which portfolio strategies tend to create superior SRI portfolios. Finally, the third idea (Chapter 5) compares the performance of a real-world pension scheme (USS) before and after the recent major changes in the pension rules under different dynamic asset allocation strategies and the fixed-mix portfolio approach and quantifies the redistributive effects between various stakeholders. Although this study deals with a specific pension scheme, the methodology can be applied by other major pension schemes in countries such as the UK and USA that have changed their rules.
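A minimal sketch of the Bayes-Stein estimator mentioned above, following the standard Jorion-style shrinkage of sample means toward the minimum-variance portfolio return; the return matrix is synthetic and this is not the thesis implementation.

```python
# Minimal sketch of Jorion-style Bayes-Stein shrinkage of expected
# returns toward the minimum-variance portfolio return (illustrative,
# not the thesis code). 'returns' is a hypothetical T x N matrix.
import numpy as np

rng = np.random.default_rng(2)
returns = rng.normal(0.005, 0.04, size=(120, 5))   # 120 months, 5 assets

T, N = returns.shape
mu = returns.mean(axis=0)
sigma = np.cov(returns, rowvar=False)
sigma_inv = np.linalg.inv(sigma)
ones = np.ones(N)

mu_minvar = ones @ sigma_inv @ mu / (ones @ sigma_inv @ ones)
diff = mu - mu_minvar * ones
lam = (N + 2) / (diff @ sigma_inv @ diff)
w = lam / (lam + T)                       # shrinkage intensity

mu_bs = (1 - w) * mu + w * mu_minvar * ones   # shrunk expected returns
print(np.round(mu_bs, 4))
```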
Abstract:
Protein–ligand binding site prediction methods aim to predict, from amino acid sequence, protein–ligand interactions, putative ligands, and ligand binding site residues using either sequence information, structural information, or a combination of both. In silico characterization of protein–ligand interactions has become extremely important to help determine a protein’s functionality, as in vivo-based functional elucidation is unable to keep pace with the current growth of sequence databases. Additionally, in vitro biochemical functional elucidation is time-consuming, costly, and may not be feasible for large-scale analysis, such as drug discovery. Thus, in silico prediction of protein–ligand interactions must be utilized to aid in functional elucidation. Here, we briefly discuss protein function prediction, prediction of protein–ligand interactions, the Critical Assessment of Techniques for Protein Structure Prediction (CASP) and the Continuous Automated Model EvaluatiOn (CAMEO) competitions, along with their role in shaping the field. We also discuss, in detail, our cutting-edge web-server method, FunFOLD, for the structurally informed prediction of protein–ligand interactions. Furthermore, we provide a step-by-step guide on using the FunFOLD web server and FunFOLD3 downloadable application, along with some real-world examples where the FunFOLD methods have been used to aid functional elucidation.
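For context only (this is not the FunFOLD method), the sketch below shows the kind of binding-site annotation such predictors target: listing residues within 5 Å of a bound ligand in a solved structure using Biopython. The input file and the ligand name HEM are hypothetical.

```python
# Generic sketch (not FunFOLD): list residues within 5 angstroms of a
# bound ligand in an experimental structure, using Biopython.
# 'complex.pdb' and the ligand name 'HEM' are hypothetical.
from Bio.PDB import PDBParser, NeighborSearch

structure = PDBParser(QUIET=True).get_structure("prot", "complex.pdb")
model = structure[0]

protein_atoms = [a for a in model.get_atoms() if a.get_parent().id[0] == " "]
ligand_atoms = [a for a in model.get_atoms()
                if a.get_parent().get_resname() == "HEM"]

ns = NeighborSearch(protein_atoms)
site = set()
for atom in ligand_atoms:
    for near in ns.search(atom.coord, 5.0):          # 5 A cutoff
        res = near.get_parent()
        site.add((res.get_parent().id, res.id[1], res.get_resname()))

for chain_id, resnum, resname in sorted(site):
    print(chain_id, resnum, resname)
```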
Abstract:
Subspace clustering groups a set of samples from a union of several linear subspaces into clusters, so that the samples in the same cluster are drawn from the same linear subspace. In the majority of the existing work on subspace clustering, clusters are built based on feature information, while sample correlations in their original spatial structure are simply ignored. Besides, the original high-dimensional feature vectors contain noisy/redundant information, and the time complexity grows exponentially with the number of dimensions. To address these issues, we propose a tensor low-rank representation (TLRR) and sparse coding-based subspace clustering method (TLRRSC) by simultaneously considering feature information and spatial structures. TLRR seeks the lowest-rank representation over original spatial structures along all spatial directions. Sparse coding learns a dictionary along feature spaces, so that each sample can be represented by a few atoms of the learned dictionary. The affinity matrix used for spectral clustering is built from the joint similarities in both spatial and feature spaces. TLRRSC can well capture the global structure and inherent feature information of data, and provide a robust subspace segmentation from corrupted data. Experimental results on both synthetic and real-world data sets show that TLRRSC outperforms several established state-of-the-art methods.
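A sketch of the final step only: spectral clustering on a precomputed affinity matrix. The TLRR and sparse-coding construction of that matrix is the paper's contribution and is replaced here by a naive RBF affinity on synthetic data.

```python
# Sketch of the final step only: spectral clustering on a precomputed
# affinity matrix (the TLRR + sparse-coding construction of that matrix
# is the paper's contribution and is not reproduced here).
import numpy as np
from sklearn.cluster import SpectralClustering
from sklearn.metrics.pairwise import rbf_kernel

rng = np.random.default_rng(3)
X = np.vstack([rng.normal(0, 1, (50, 20)), rng.normal(4, 1, (50, 20))])

affinity = rbf_kernel(X, gamma=0.1)            # naive stand-in affinity
labels = SpectralClustering(n_clusters=2,
                            affinity="precomputed",
                            random_state=0).fit_predict(affinity)
print(labels[:5], labels[-5:])
```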
Abstract:
Tensor clustering is an important tool that exploits intrinsically rich structures in real-world multiarray or tensor datasets. Often in dealing with those datasets, standard practice is to use subspace clustering that is based on vectorizing multiarray data. However, vectorization of tensorial data does not exploit complete structure information. In this paper, we propose a subspace clustering algorithm without adopting any vectorization process. Our approach is based on a novel heterogeneous Tucker decomposition model taking into account cluster membership information. We propose a new clustering algorithm that alternates between different modes of the proposed heterogeneous tensor model. All but the last mode have closed-form updates. Updating the last mode reduces to optimizing over the multinomial manifold, for which we investigate second-order Riemannian geometry and propose a trust-region algorithm. Numerical experiments show that our proposed algorithm competes effectively with state-of-the-art clustering algorithms that are based on tensor factorization.
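A naive baseline sketch for comparison (not the paper's Riemannian trust-region algorithm): Tucker-decompose the data tensor with tensorly and run k-means on the sample-mode factor; the data and ranks are illustrative.

```python
# Naive baseline sketch (not the paper's Riemannian trust-region method):
# Tucker-decompose the data tensor, then k-means the sample-mode factor.
import numpy as np
import tensorly as tl
from tensorly.decomposition import tucker
from sklearn.cluster import KMeans

rng = np.random.default_rng(4)
# 100 samples of 8 x 8 synthetic slices forming two loose groups
X = np.concatenate([rng.normal(0, 1, (50, 8, 8)),
                    rng.normal(3, 1, (50, 8, 8))])

core, factors = tucker(tl.tensor(X), rank=[2, 4, 4])
sample_factor = tl.to_numpy(factors[0])        # 100 x 2 representation
labels = KMeans(n_clusters=2, n_init=10,
                random_state=0).fit_predict(sample_factor)
print(labels[:5], labels[-5:])
```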
Abstract:
Searching in a dataset for elements that are similar to a given query element is a core problem in applications that manage complex data, and has been aided by metric access methods (MAMs). A growing number of applications require indices that must be built faster and repeatedly, also providing faster response for similarity queries. The increase in the main memory capacity and its lowering costs also motivate using memory-based MAMs. In this paper, we propose the Onion-tree, a new and robust dynamic memory-based MAM that slices the metric space into disjoint subspaces to provide quick indexing of complex data. It introduces three major characteristics: (i) a partitioning method that controls the number of disjoint subspaces generated at each node; (ii) a replacement technique that can change the leaf node pivots in insertion operations; and (iii) range and k-NN extended query algorithms to support the new partitioning method, including a new visit order of the subspaces in k-NN queries. Performance tests with both real-world and synthetic datasets showed that the Onion-tree is very compact. Comparisons of the Onion-tree with the MM-tree and a memory-based version of the Slim-tree showed that the Onion-tree was always faster to build the index. The experiments also showed that the Onion-tree significantly improved range and k-NN query processing performance and was the most efficient MAM, followed by the MM-tree, which in turn outperformed the Slim-tree in almost all the tests.
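A generic illustration of the metric-space pruning principle that MAMs such as the Onion-tree exploit (this is not the Onion-tree itself): precompute distances to a pivot and use the triangle inequality to discard objects during a range query.

```python
# Generic illustration of pivot-based pruning in a metric space (the
# principle behind MAMs such as the Onion-tree), not the Onion-tree itself.
import numpy as np

rng = np.random.default_rng(5)
data = rng.normal(size=(1000, 16))
pivot = data[0]
dist_to_pivot = np.linalg.norm(data - pivot, axis=1)  # precomputed once

def range_query(q, radius):
    dq = np.linalg.norm(q - pivot)
    # Triangle inequality: |d(x, pivot) - d(q, pivot)| <= d(x, q), so
    # objects violating this bound cannot be within 'radius' of the query
    # and are discarded without computing their distance to it.
    candidates = np.where(np.abs(dist_to_pivot - dq) <= radius)[0]
    hits = [i for i in candidates
            if np.linalg.norm(data[i] - q) <= radius]
    return hits, len(candidates)

hits, checked = range_query(rng.normal(size=16), radius=4.0)
print(f"{len(hits)} results; only {checked} of {len(data)} "
      f"candidate distances computed")
```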
Abstract:
In this article we propose a 0-1 optimization model to determine a crop rotation schedule for each plot in a cropping area. The rotations have the same duration in all the plots and the crops are selected to maximize plot occupation. The crops may have different production times and planting dates. The problem includes planting constraints for adjacent plots and also for sequences of crops in the rotations. Moreover, crops cultivated for green manuring and fallow periods are scheduled into each plot. As the model has, in general, a great number of constraints and variables, we propose a heuristic based on column generation. To evaluate the performance of the model and the method, computational experiments using real-world data were performed. The solutions obtained indicate that the method generates good results.
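A toy 0-1 model in the spirit of the formulation described above (not the paper's model), written with PuLP: maximize plot occupation subject to a simple adjacency restriction. The plots, periods, crops and adjacency list are made up.

```python
# Toy 0-1 sketch in the spirit of the model described above (not the
# paper's formulation): assign crops to plot-period slots to maximize
# occupation, forbidding the same crop in two adjacent plots at once.
from pulp import (LpProblem, LpMaximize, LpVariable, lpSum, LpBinary,
                  PULP_CBC_CMD)

plots, periods, crops = range(3), range(4), ["maize", "beans", "lettuce"]
adjacent = [(0, 1), (1, 2)]                             # made-up adjacency

x = LpVariable.dicts("x", (plots, periods, crops), cat=LpBinary)
model = LpProblem("crop_rotation_toy", LpMaximize)
model += lpSum(x[p][t][c] for p in plots for t in periods for c in crops)

for p in plots:
    for t in periods:
        model += lpSum(x[p][t][c] for c in crops) <= 1   # one crop per slot
for (p, q) in adjacent:
    for t in periods:
        for c in crops:
            model += x[p][t][c] + x[q][t][c] <= 1        # adjacency rule

model.solve(PULP_CBC_CMD(msg=False))
print("occupied slots:", int(model.objective.value()))
```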
Abstract:
We consider an agricultural production problem, in which one must meet a known demand of crops while respecting ecologically-based production constraints. The problem is twofold: in order to meet the demand, one must determine the division of the available heterogeneous arable areas in plots and, for each plot, obtain an appropriate crop rotation schedule. Rotation plans must respect ecologically-based constraints such as the interdiction of certain crop successions, and the regular insertion of fallows and green manures. We propose a linear formulation for this problem, in which each variable is associated with a crop rotation schedule. The model may include a large number of variables and it is, therefore, solved by means of a column-generation approach. We also discuss some extensions to the model, in order to incorporate additional characteristics found in field conditions. A set of computational tests using instances based on real-world data confirms the efficacy of the proposed methodology.
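A schematic of the column-oriented master problem behind such a column-generation approach; the notation is illustrative rather than the paper's exact model, with one column per candidate rotation schedule.

```latex
% Schematic restricted master problem: each column j in J is one complete
% rotation schedule, a_{cj} is the amount of crop c it supplies, d_c the
% demand for crop c, c_j its cost (e.g. area used), and lambda_j the
% (LP-relaxed) selection variable for schedule j.
\begin{align*}
\min_{\lambda \ge 0}\quad & \sum_{j \in J} c_j\,\lambda_j \\
\text{s.t.}\quad & \sum_{j \in J} a_{cj}\,\lambda_j \ \ge\ d_c
  && \text{for every crop } c, \\
 & \sum_{j \in J_p} \lambda_j \ \le\ 1
  && \text{for every plot } p \text{, where } J_p \text{ are its columns.}
\end{align*}
% The pricing subproblem searches for a new rotation schedule (column)
% with negative reduced cost; columns are added until none exists.
```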
Abstract:
Complex networks obtained from real-world systems are often characterized by incompleteness and noise, consequences of imperfect sampling as well as artifacts in the acquisition process. Because the characterization, analysis and modeling of complex systems underlain by complex networks are critically affected by the quality and completeness of the respective initial structures, it becomes imperative to devise methodologies for identifying and quantifying the effects of the sampling on the network structure. One way to evaluate these effects is through an analysis of the sensitivity of complex network measurements to perturbations in the topology of the network. In this paper, measurement sensitivity is quantified in terms of the relative entropy of the respective distributions. Three particularly important kinds of progressive perturbations to the network are considered, namely, edge suppression, addition and rewiring. The measurements allowing the best balance of stability (smaller sensitivity to perturbations) and discriminability (separation between different network topologies) are identified with respect to each type of perturbation. Such an analysis includes eight different measurements applied to six different complex network models and three real-world networks. This approach allows one to choose the appropriate measurements in order to obtain accurate results for networks where sampling bias cannot be avoided, a very frequent situation in research on complex networks.
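A minimal sketch of this kind of perturbation analysis, with illustrative choices of measurement and perturbation: rewire a fraction of edges in a networkx graph and quantify the change in the degree distribution with the relative entropy.

```python
# Minimal sketch of the perturbation analysis described above: rewire a
# fraction of edges and quantify the change in the degree distribution
# with the relative entropy (KL divergence). Illustrative choices only.
import networkx as nx
import numpy as np
from scipy.stats import entropy

G = nx.barabasi_albert_graph(1000, 3, seed=0)
H = G.copy()
nx.double_edge_swap(H, nswap=int(0.1 * H.number_of_edges()), max_tries=10**5)

def degree_hist(graph, n_bins):
    counts = np.bincount([d for _, d in graph.degree()], minlength=n_bins)
    return (counts + 1e-12) / counts.sum()      # smooth to avoid log(0)

n_bins = max(max(dict(G.degree()).values()),
             max(dict(H.degree()).values())) + 1
print("relative entropy:",
      entropy(degree_hist(G, n_bins), degree_hist(H, n_bins)))
```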
Abstract:
A new complex network model is proposed which is founded on growth, with new connections being established proportionally to the current dynamical activity of each node, which can be understood as a generalization of the Barabasi-Albert static model. By using several topological measurements, as well as optimal multivariate methods (canonical analysis and maximum likelihood decision), we show that this new model provides, among several other theoretical kinds of networks including Watts-Strogatz small-world networks, the greatest compatibility with three real-world cortical networks.
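A toy sketch of the growth rule described above (illustrative only, not the authors' exact model): new nodes attach with probability proportional to a dynamical activity score that is boosted by new connections and otherwise decays, rather than to static degree.

```python
# Toy sketch of activity-driven growth (illustrative, not the authors'
# exact model): each new node attaches to existing nodes chosen with
# probability proportional to a dynamical 'activity' score that decays
# over time, instead of the static degree used by Barabasi-Albert.
import random
import networkx as nx

random.seed(0)
G = nx.complete_graph(3)
activity = {n: 1.0 for n in G}           # initial activity of seed nodes

for new in range(3, 500):
    nodes = list(G.nodes())
    weights = [activity[n] for n in nodes]
    targets = set(random.choices(nodes, weights=weights, k=2))
    G.add_node(new)
    activity[new] = 1.0
    for t in targets:
        G.add_edge(new, t)
        activity[t] += 1.0               # connecting boosts activity...
    for n in nodes:
        activity[n] *= 0.99              # ...which otherwise decays

print(G.number_of_nodes(), "nodes,", G.number_of_edges(), "edges")
```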