996 results for Big Pizza



Relevance: 20.00%

Abstract:

This study investigated the relationship between the Big 5, measured at factor and facet levels, and dimensions of both psychological and subjective well-being. Three hundred and thirty-seven participants completed the 30 Facet International Personality Item Pool Scale, Satisfaction with Life Scale, Positive and Negative Affectivity Schedule, and Ryff’s Scales of Psychological Well-Being. Cross-correlation decomposition presented a parsimonious picture of how well-being is related to personality factors. Incremental facet prediction was examined using double-adjusted r2 confidence intervals and semi-partial correlations. Incremental prediction by facets over factors ranged from almost nothing to a third more variance explained, suggesting more modest incremental prediction than previously reported in the literature. Examination of semi-partial correlations controlling for factors revealed a small number of important facet-well-being correlations. All data and R analysis scripts are made available in an online repository.
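The abstract's central comparison, how much variance the 30 facets explain beyond the five factors, and the follow-up semi-partial correlations can be illustrated with a small sketch. It assumes pandas and statsmodels and uses hypothetical column names (`swl`, `factor_*`, `facet_*`, `facet_anxiety`); it is not the authors' released R analysis scripts.

```python
# Minimal sketch: incremental prediction by facets over factors, plus a
# semi-partial correlation for a single facet. Hypothetical column names;
# not the authors' released R scripts.
import pandas as pd
import statsmodels.api as sm

df = pd.read_csv("big5_wellbeing.csv")                         # hypothetical data file
factors = [c for c in df.columns if c.startswith("factor_")]   # 5 domain scores
facets = [c for c in df.columns if c.startswith("facet_")]     # 30 facet scores
y = df["swl"]                                                  # e.g. satisfaction with life

def r2(predictors):
    """R-squared from an OLS regression of the well-being score on the predictors."""
    return sm.OLS(y, sm.add_constant(df[predictors])).fit().rsquared

r2_factors, r2_facets = r2(factors), r2(facets)
print(f"factors R^2 = {r2_factors:.3f}, facets R^2 = {r2_facets:.3f}, "
      f"incremental = {r2_facets - r2_factors:.3f}")

# Semi-partial correlation of one facet with well-being, controlling the facet
# for the five factors (correlation of y with the facet's residual).
resid = sm.OLS(df["facet_anxiety"], sm.add_constant(df[factors])).fit().resid
print("sr(facet_anxiety | factors) =", round(y.corr(resid), 3))
```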

Relevance: 20.00%

Abstract:

This paper introduces and investigates large iterative multitier ensemble (LIME) classifiers specifically tailored for big data. These classifiers are very large, but are quite easy to generate and use. They can be so large that it makes sense to use them only for big data. They are generated automatically through several iterations of applying ensemble meta classifiers. They incorporate diverse ensemble meta classifiers into several tiers simultaneously and combine them into one automatically generated iterative system, so that many ensemble meta classifiers function as integral parts of other ensemble meta classifiers at higher tiers. In this paper, we carry out a comprehensive investigation of the performance of LIME classifiers for a problem concerning the security of big data. Our experiments compare LIME classifiers with various base classifiers and standard ensemble meta classifiers. The results demonstrate that LIME classifiers can significantly increase the accuracy of classifications, performing better than both the base classifiers and the standard ensemble meta classifiers.
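To make the tiering idea concrete, here is a minimal sketch of how ensemble meta-classifiers can be nested so that one ensemble becomes an integral part of another at a higher tier. It assumes scikit-learn, with arbitrarily chosen base learners and tier depth; it is only in the spirit of the LIME construction, not the paper's generation procedure.

```python
# Minimal sketch: nesting ensemble meta-classifiers into tiers so that each tier
# wraps the previous one. In the spirit of LIME classifiers, not the paper's exact
# construction; synthetic data stands in for a big-data security task.
from sklearn.datasets import make_classification
from sklearn.ensemble import AdaBoostClassifier, BaggingClassifier
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier

X, y = make_classification(n_samples=2000, n_features=30, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

model = DecisionTreeClassifier(max_depth=3, random_state=0)   # tier 0: base classifier

# Each higher tier wraps the whole lower-tier model inside another ensemble
# meta-classifier (the parameter is named `base_estimator` in scikit-learn < 1.2).
for tier, Meta in enumerate([AdaBoostClassifier, BaggingClassifier], start=1):
    model = Meta(estimator=model, n_estimators=10, random_state=tier)

model.fit(X_tr, y_tr)
print("multitier ensemble accuracy:", round(model.score(X_te, y_te), 3))
```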

Relevance: 20.00%

Abstract:

Big data presents a remarkable opportunity for organisations to obtain critical intelligence to drive decisions and gain insights as never before. However, big data generates high network traffic. Moreover, the continuous growth in the variety of network traffic due to big data variety has made the network one of the key big data challenges. In this article, we present a comprehensive analysis of big data variety and its adverse effects on network performance. We present a taxonomy of big data variety and discuss the various dimensions of big data variety features. We also discuss how these features influence the interconnection network requirements. Finally, we discuss some of the challenges each big data variety dimension presents and possible approaches to address them.

Relevance: 20.00%

Abstract:

Stochastic search techniques such as evolutionary algorithms (EAs) are known to be better explorers of the search space than conventional techniques, including deterministic methods. However, in the era of big data, the suitability of evolutionary algorithms, like that of most other search methods and learning algorithms, is naturally questioned. Big data poses new computational challenges, including very high dimensionality and sparseness of data. Evolutionary algorithms' superior exploration skills should make them promising candidates for handling optimization problems involving big data. High-dimensional problems introduce added complexity to the search space. However, EAs need to be enhanced to ensure that the majority of potential winning solutions get the chance to survive and mature. In this paper we present an evolutionary algorithm with enhanced ability to deal with the problems of high dimensionality and sparseness of data. In addition to an informed exploration of the solution space, this technique balances exploration and exploitation using a hierarchical multi-population approach. The proposed model uses informed genetic operators to introduce diversity by expanding the scope of the search process at the expense of redundant, less promising members of the population. The next phase of the algorithm attempts to deal with the problem of high dimensionality by ensuring a broader and more exhaustive search and preventing the premature death of potential solutions. To achieve this, in addition to the above exploration-controlling mechanism, a multi-tier hierarchical architecture is employed in which, in separate layers, the less fit, isolated individuals evolve in dynamic sub-populations that coexist alongside the original or main population. Evaluation of the proposed technique on well-known benchmark problems confirms its superior performance. The algorithm has also been successfully applied to a real-world problem of financial portfolio management. Although the proposed method cannot be considered big data-ready, it is certainly a move in the right direction.
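A compact, heavily simplified sketch of the multi-population idea described above: instead of discarding less fit individuals, they migrate to a sub-population that evolves alongside the main one and can later re-seed it. The objective (a sphere function), operators, and parameters are all assumptions for illustration; this is not the paper's algorithm.

```python
# Toy two-layer multi-population EA: the less fit half migrates to a sub-population
# instead of being discarded, and its best member is periodically promoted back.
# Illustrative only, not the paper's method.
import random

DIM, POP, GENS = 20, 40, 100
fitness = lambda x: -sum(v * v for v in x)               # maximise (sphere problem)
rand_ind = lambda: [random.uniform(-5, 5) for _ in range(DIM)]
mutate = lambda x: [v + random.gauss(0, 0.1) for v in x]

main, sub = [rand_ind() for _ in range(POP)], []

for gen in range(GENS):
    main.sort(key=fitness, reverse=True)
    sub.extend(main[POP // 2:])                          # weak half moves to the sub-layer
    main = main[:POP // 2]
    main += [mutate(random.choice(main)) for _ in range(POP // 2)]           # exploit in main
    sub = sorted((mutate(i) for i in sub), key=fitness, reverse=True)[:POP]  # explore in sub
    if gen % 10 == 0 and sub:
        main[-1] = sub[0]                                # promote the best isolated individual

print("best fitness found:", round(fitness(max(main, key=fitness)), 4))
```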

Relevance: 20.00%

Abstract:

Recently, the Big Data paradigm has received considerable attention because it offers a great opportunity to mine knowledge from massive amounts of data. However, the newly mined knowledge will be useless if the data is fake, and sometimes massive amounts of data cannot be collected at all because of concerns about data abuse. This situation calls for new security solutions. On the other hand, the defining feature of Big Data is that it is "massive", which requires that any security solution for Big Data be "efficient". In this paper, we propose a new identity-based generalized signcryption scheme to solve the above problems. In particular, it has the following two properties that fit the efficiency requirement. (1) It can work as an encryption scheme, a signature scheme or a signcryption scheme as needed. (2) It does not carry the heavy burden of complicated certificate management that traditional cryptographic schemes do. Furthermore, our proposed scheme can be proven secure in the standard model.
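The "generalized" property, a single primitive that degenerates to encryption-only, signature-only, or full signcryption as needed, can be sketched at the interface level. The toy below uses symmetric XOR-keystream and HMAC stand-ins purely to show the mode dispatch; it is not identity-based, not secure, and not the scheme proposed in the paper.

```python
# Toy sketch of the mode dispatch only: one call that acts as encryption, signature,
# or signcryption depending on which keys are supplied. Symmetric stand-ins replace
# the identity-based construction; NOT secure and NOT the paper's scheme.
import hashlib, hmac, os

def _keystream(key, n):
    return hashlib.shake_256(key).digest(n)

def generalized_signcrypt(msg, enc_key=None, sig_key=None):
    """enc_key only -> encrypt; sig_key only -> sign; both -> signcrypt."""
    ct = bytes(a ^ b for a, b in zip(msg, _keystream(enc_key, len(msg)))) if enc_key else msg
    tag = hmac.new(sig_key, ct, hashlib.sha256).digest() if sig_key else b""
    return ct, tag

def generalized_unsigncrypt(ct, tag, enc_key=None, sig_key=None):
    if sig_key and not hmac.compare_digest(tag, hmac.new(sig_key, ct, hashlib.sha256).digest()):
        raise ValueError("verification failed")
    return bytes(a ^ b for a, b in zip(ct, _keystream(enc_key, len(ct)))) if enc_key else ct

ek, sk = os.urandom(32), os.urandom(32)
ct, tag = generalized_signcrypt(b"big data record", enc_key=ek, sig_key=sk)  # signcryption mode
assert generalized_unsigncrypt(ct, tag, enc_key=ek, sig_key=sk) == b"big data record"
```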

Relevance: 20.00%

Abstract:

This article argues that big media in Australia promote three myths about rural and regional news as part of their case to deregulate the industry. These myths are that geography no longer matters in local news; that big media are the only ones who can save regional news; and that people in regional Australia can access less news than their city counterparts.

Relevance: 20.00%

Abstract:

OBJECTIVE: To examine Corporate Social Responsibility (CSR) tactics by identifying the key characteristics of CSR strategies as described in the corporate documents of selected 'Big Food' companies. METHODS: A mixed methods content analysis was used to analyse the information contained on Australian Big Food company websites. Data sources included company CSR reports and web-based content that related to CSR initiatives employed in Australia. RESULTS: A total of 256 CSR activities were identified across six organisations. Of these, the majority related to the categories of environment (30.5%), responsibility to consumers (25.0%) or community (19.5%). CONCLUSIONS: Big Food companies appear to be using CSR activities to: 1) build brand image through initiatives associated with the environment and responsibility to consumers; 2) target parents and children through community activities; and 3) align themselves with respected organisations and events in an effort to transfer their positive image attributes to their own brands. IMPLICATIONS: Results highlight the type of CSR strategies Big Food companies are employing. These findings serve as a guide to mapping and monitoring CSR as a specific form of marketing.

Relevance: 20.00%

Abstract:

This paper examines the effects of investor protection, firm informational problems (proxied by firm size, firm age, and the number of analysts following), and Big N auditors on firms' cost of debt around the world. Using data from 1994 to 2006 and over 90,000 firm-year observations, we find that the cost of debt is lower when firms are audited by Big N auditors, especially in countries with strong investor protection. We also find that firms with more informational problems (i.e., greater information asymmetry) benefit more from Big N auditors, in terms of a lower cost of debt, only in countries with stronger investor protection.
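The headline test, whether the Big N discount on the cost of debt is larger where investor protection is stronger, comes down to an interaction term. The sketch below uses statsmodels with hypothetical column names for a firm-year panel; it is not the paper's actual specification, controls, or data.

```python
# Illustrative interaction regression for the headline result. Hypothetical column
# names on a firm-year panel; not the paper's specification or data.
import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("firm_years.csv")    # hypothetical panel, 1994-2006, ~90,000 rows

model = smf.ols(
    "cost_of_debt ~ big_n * investor_protection"
    " + firm_size + firm_age + analyst_following",
    data=df,
).fit()

# A negative big_n coefficient: Big N clients pay less for debt on average.
# A negative big_n:investor_protection interaction: the discount is larger
# in countries with stronger investor protection.
print(model.summary())
```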

Relevance: 20.00%

Abstract:

Smart grid is a technological innovation that improves the efficiency, reliability, economics, and sustainability of electricity services. It plays a crucial role in modern energy infrastructure. The main challenges of smart grids, however, are how to manage different types of front-end intelligent devices such as power assets and smart meters efficiently, and how to process the huge amount of data received from these devices. Cloud computing, a technology that provides computational resources on demand, is a good candidate to address these challenges since it has several desirable properties such as energy saving, cost saving, agility, scalability, and flexibility. In this paper, we propose a secure cloud computing based framework for big data information management in smart grids, which we call 'Smart-Frame.' The main idea of our framework is to build a hierarchical structure of cloud computing centers to provide different types of computing services for information management and big data analysis. In addition to this structural framework, we present a security solution based on identity-based encryption, signatures, and proxy re-encryption to address critical security issues of the proposed framework.
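A purely structural sketch of the hierarchy of cloud computing centres the abstract describes, with the identity-based encryption and proxy re-encryption layer reduced to a placeholder comment. Tier names, methods, and the aggregation rule are illustrative assumptions, not the Smart-Frame design itself.

```python
# Structural sketch: a hierarchy of cloud centres for smart-grid data, with the
# crypto layer left as a placeholder. Tier names and the aggregation rule are
# illustrative assumptions, not the Smart-Frame design.
from dataclasses import dataclass, field
from typing import Optional

@dataclass
class CloudCentre:
    name: str
    tier: str                                   # "top", "regional" or "end-user"
    parent: Optional["CloudCentre"] = None
    readings: list = field(default_factory=list)

    def ingest(self, source, value):
        # Placeholder: identity-based encryption / proxy re-encryption would be
        # applied here before the value is stored or forwarded.
        self.readings.append((source, value))

    def report_up(self):
        # Each tier pushes an aggregate to its parent for grid-wide analysis.
        if self.parent and self.readings:
            avg = sum(v for _, v in self.readings) / len(self.readings)
            self.parent.ingest(self.name, avg)

top = CloudCentre("grid-hq", "top")
region = CloudCentre("region-east", "regional", parent=top)
leaf = CloudCentre("substation-12", "end-user", parent=region)

leaf.ingest("smart-meter-0042", 3.7)            # device data enters at the bottom tier
leaf.ingest("smart-meter-0043", 4.1)
leaf.report_up()
region.report_up()
print(top.readings)                             # roughly [('region-east', 3.9)]
```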

Relevance: 20.00%

Abstract:

As a leading framework for processing and analyzing big data, MapReduce is leveraged by many enterprises to parallelize their data processing on distributed computing systems. Unfortunately, the all-to-all data forwarding from map tasks to reduce tasks in the traditional MapReduce framework generates a large amount of network traffic. The fact that the intermediate data generated by map tasks can be combined, with significant traffic reduction in many applications, motivates us to propose a data aggregation scheme for MapReduce jobs in the cloud. Specifically, we design an aggregation architecture under the existing MapReduce framework with the objective of minimizing data traffic during the shuffle phase, in which aggregators can reside anywhere in the cloud. Experimental results show that our proposal outperforms existing work by reducing network traffic significantly.
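The core intuition, that pre-combining intermediate key-value pairs before the shuffle shrinks what must cross the network, can be shown with a toy word-count example. This is an in-process illustration of the general combining idea, not the paper's aggregator-placement architecture.

```python
# Toy shuffle-phase aggregation: pre-combining map output means only one
# (key, partial_count) pair per key crosses the network, instead of every raw pair.
# Word count stands in for a real job; not the paper's architecture.
from collections import Counter
from functools import reduce

chunks = ["big data big traffic", "reduce the big traffic", "data data data"]

# Without aggregation: every raw (word, 1) pair is shuffled to the reducers.
raw_pairs = [(w, 1) for chunk in chunks for w in chunk.split()]

# With aggregation: each map side (or an in-network aggregator) pre-combines.
combined = [Counter(chunk.split()) for chunk in chunks]
shuffled_pairs = [pair for c in combined for pair in c.items()]

# The reduce phase yields the same final counts either way.
final = reduce(lambda a, b: a + b, combined, Counter())
print("pairs shuffled without aggregation:", len(raw_pairs))       # 11
print("pairs shuffled with aggregation:   ", len(shuffled_pairs))  # 8
print("final counts:", dict(final))
```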