98 results for BIG-BANG NUCLEOSYNTHESIS


Relevance:

20.00%

Publisher:

Abstract:

Stochastic search techniques such as evolutionary algorithms (EAs) are known to be better explorers of the search space than conventional techniques, including deterministic methods. However, in the era of big data, the suitability of evolutionary algorithms, like that of most other search methods and learning algorithms, is naturally questioned. Big data pose new computational challenges, including very high dimensionality and sparseness of data. Evolutionary algorithms' superior exploration skills should make them promising candidates for handling optimization problems involving big data, yet high-dimensional problems add complexity to the search space, and EAs need to be enhanced to ensure that the majority of potentially winning solutions get the chance to survive and mature. In this paper we present an evolutionary algorithm with an enhanced ability to deal with the problems of high dimensionality and sparseness of data. In addition to an informed exploration of the solution space, this technique balances exploration and exploitation using a hierarchical multi-population approach. The proposed model uses informed genetic operators to introduce diversity by expanding the scope of the search process at the expense of redundant, less promising members of the population. The next phase of the algorithm deals with high dimensionality by ensuring a broader and more exhaustive search and preventing the premature death of potential solutions. To achieve this, in addition to the exploration-controlling mechanism above, a multi-tier hierarchical architecture is employed in which, in separate layers, the less fit isolated individuals evolve in dynamic sub-populations that coexist alongside the original or main population. Evaluation of the proposed technique on well-known benchmark problems confirms its superior performance, and the algorithm has also been successfully applied to a real-world problem of financial portfolio management. Although the proposed method cannot be considered big-data-ready, it is certainly a move in the right direction.
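
The layered idea described above can be illustrated with a small sketch. The Python code below is a minimal, hypothetical illustration rather than the authors' implementation: a main population evolves normally, the weakest individuals are moved to a separate sub-population instead of being discarded, and its best member periodically migrates back. All names and parameter values (e.g. `MIGRATE_EVERY`, the sphere objective) are invented for illustration.

```python
import random

DIM, POP, SUB_POP, GENS = 50, 40, 20, 200    # toy sizes, not the paper's settings
MIGRATE_EVERY = 10

def fitness(x):                              # toy objective: minimise the sphere function
    return sum(v * v for v in x)

def random_individual():
    return [random.uniform(-5, 5) for _ in range(DIM)]

def mutate(x, rate=0.1, step=0.5):
    return [v + random.gauss(0, step) if random.random() < rate else v for v in x]

def crossover(a, b):
    return [ai if random.random() < 0.5 else bi for ai, bi in zip(a, b)]

def evolve(pop):
    """One generation: keep the best, breed from the fitter half, mutate children."""
    pop = sorted(pop, key=fitness)
    children = [pop[0]]                      # elitism: the best individual survives
    while len(children) < len(pop):
        a, b = random.sample(pop[: len(pop) // 2], 2)
        children.append(mutate(crossover(a, b)))
    return children

main_pop = [random_individual() for _ in range(POP)]
sub_pop = []                                  # "nursery" layer for less fit individuals

for gen in range(GENS):
    main_pop = sorted(evolve(main_pop), key=fitness)
    # Instead of discarding the weakest members, isolate them in a sub-population
    sub_pop.extend(main_pop[-5:])
    main_pop = main_pop[:-5] + [random_individual() for _ in range(5)]
    if len(sub_pop) >= 4:
        sub_pop = evolve(sub_pop)[:SUB_POP]
    # Periodically let a matured sub-population member re-enter the main population
    if gen % MIGRATE_EVERY == 0 and sub_pop:
        main_pop[-1] = min(sub_pop, key=fitness)

print("best fitness:", fitness(min(main_pop, key=fitness)))
```

The point of the sketch is the structure, not the operators: low-fitness individuals get extra time to mature in their own layer rather than being eliminated early.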

Relevance:

20.00%

Publisher:

Abstract:

Recently, the Big Data paradigm has received considerable attention, since it offers a great opportunity to mine knowledge from massive amounts of data. However, the newly mined knowledge is useless if the data are fake, and sometimes the massive amounts of data cannot be collected at all because of concerns about data abuse. This situation calls for new security solutions. On the other hand, the defining feature of Big Data is that it is "massive", which requires that any security solution for Big Data be "efficient". In this paper, we propose a new identity-based generalized signcryption scheme to address these problems. In particular, it has the following two properties that fit the efficiency requirement: (1) it can work as an encryption scheme, a signature scheme or a signcryption scheme as needed; and (2) it does not carry the heavy burden of complicated certificate management that traditional certificate-based cryptographic schemes do. Furthermore, our proposed scheme can be proven secure in the standard model. © 2014 Elsevier Inc. All rights reserved.
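
The "generalized" property, one primitive acting as encryption, signature or signcryption as needed, can be illustrated at the interface level. The sketch below is not the proposed identity-based scheme (a real identity-based construction derives keys from identities via a private key generator); it only shows the mode-dispatch idea using off-the-shelf primitives from the Python `cryptography` package, and the class and parameter names are invented.

```python
from cryptography.fernet import Fernet
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

class GeneralizedBox:
    """One interface that encrypts, signs, or signcrypts depending on the keys supplied.
    Illustrative only: not identity-based, and not the scheme proposed in the paper."""

    def __init__(self, enc_key=None, sign_key=None):
        self.fernet = Fernet(enc_key) if enc_key else None
        self.signer = sign_key

    def process(self, message: bytes):
        if self.fernet and self.signer:            # signcryption mode: sign, then encrypt
            sig = self.signer.sign(message)
            return ("signcrypt", self.fernet.encrypt(sig + message))
        if self.fernet:                            # encryption-only mode
            return ("encrypt", self.fernet.encrypt(message))
        if self.signer:                            # signature-only mode
            return ("sign", self.signer.sign(message))
        raise ValueError("no keys supplied")

# Usage: the same object type covers all three modes as per need.
enc_key = Fernet.generate_key()
sign_key = Ed25519PrivateKey.generate()
for box in (GeneralizedBox(enc_key=enc_key),
            GeneralizedBox(sign_key=sign_key),
            GeneralizedBox(enc_key=enc_key, sign_key=sign_key)):
    mode, blob = box.process(b"big data record")
    print(mode, len(blob))
```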

Relevance:

20.00%

Publisher:

Abstract:

Dr Sergie Bang is a Papua New Guinean who studied in Australia at the University of Western Australia in 1988-1993. He studied on an AIDAB Scholarship and completed a PhD in Agriculture. The interview is conducted in English by Dr Jonathan Ritchie of Deakin University, and Dr Musawe Sinebare of the Pacific Adventist University. This set comprises: an interview recording, a timed summary, and a photograph.

Relevance:

20.00%

Publisher:

Abstract:

This article argues that big media in Australia promote three myths about rural and regional news as part of their case to deregulate the industry. These myths are that geography no longer matters in local news; that big media are the only ones who can save regional news; and that people in regional Australia can access less news than their city counterparts.

Relevance:

20.00%

Publisher:

Abstract:

OBJECTIVE: To examine Corporate Social Responsibility (CSR) tactics by identifying the key characteristics of CSR strategies as described in the corporate documents of selected 'Big Food' companies. METHODS: A mixed-methods content analysis was used to analyse the information on Australian Big Food company websites. Data sources included company CSR reports and web-based content relating to CSR initiatives employed in Australia. RESULTS: A total of 256 CSR activities were identified across six organisations. Of these, the majority related to the environment (30.5%), responsibility to consumers (25.0%) or the community (19.5%). CONCLUSIONS: Big Food companies appear to be using CSR activities to: 1) build brand image through initiatives associated with the environment and responsibility to consumers; 2) target parents and children through community activities; and 3) align themselves with respected organisations and events in an effort to transfer those positive image attributes to their own brands. IMPLICATIONS: The results highlight the types of CSR strategies Big Food companies are employing and serve as a guide to mapping and monitoring CSR as a specific form of marketing.

Relevance:

20.00%

Publisher:

Abstract:

This paper examines the effects of investor protection, firm informational problems (proxied by firm size, firm age, and the number of analysts following), and Big N auditors on firms' cost of debt around the world. Using data from 1994 to 2006 and over 90,000 firm-year observations, we first find that the cost of debt is lower when firms are audited by Big N auditors, especially in countries with strong investor protection. Second, we find that firms with more informational problems (i.e., greater information asymmetry) benefit more from Big N auditors, in terms of a lower cost of debt, only in countries with stronger investor protection.

Relevance:

20.00%

Publisher:

Abstract:

The smart grid is a technological innovation that improves the efficiency, reliability, economics, and sustainability of electricity services, and it plays a crucial role in modern energy infrastructure. The main challenges for smart grids, however, are how to manage the many types of front-end intelligent devices, such as power assets and smart meters, efficiently, and how to process the huge amount of data received from these devices. Cloud computing, a technology that provides computational resources on demand, is a good candidate for addressing these challenges, since it offers energy savings, cost savings, agility, scalability, and flexibility. In this paper, we propose a secure cloud-computing-based framework for big data information management in smart grids, which we call 'Smart-Frame.' The main idea of our framework is to build a hierarchical structure of cloud computing centers that provides different types of computing services for information management and big data analysis. In addition to this structural framework, we present a security solution based on identity-based encryption, signatures and proxy re-encryption to address critical security issues of the proposed framework.
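
A hierarchical structure of cloud computing centers can be sketched as a simple object model. The Python code below is a hypothetical illustration of the layered idea only (a top-level center coordinating regional centers that serve front-end devices such as smart meters); it does not reproduce the actual 'Smart-Frame' design, and its identity-based encryption, signature and proxy re-encryption components are only marked by comments. All names are invented.

```python
from dataclasses import dataclass, field
from typing import Dict, List

@dataclass
class MeterReading:
    meter_id: str
    kwh: float            # in a real deployment this payload would arrive encrypted

@dataclass
class RegionalCenter:
    """Regional cloud center: stores and pre-aggregates data from front-end devices."""
    name: str
    readings: List[MeterReading] = field(default_factory=list)

    def ingest(self, reading: MeterReading):
        # Placeholder: signature verification / proxy re-encryption would happen here.
        self.readings.append(reading)

    def regional_total(self) -> float:
        return sum(r.kwh for r in self.readings)

@dataclass
class TopCenter:
    """Top-level cloud center: coordinates regional centers and runs global analysis."""
    regions: Dict[str, RegionalCenter] = field(default_factory=dict)

    def route(self, region: str, reading: MeterReading):
        self.regions.setdefault(region, RegionalCenter(region)).ingest(reading)

    def global_total(self) -> float:
        return sum(rc.regional_total() for rc in self.regions.values())

# Usage: readings flow from smart meters through regional centers to the top center.
top = TopCenter()
top.route("north", MeterReading("m-001", 1.2))
top.route("north", MeterReading("m-002", 0.8))
top.route("south", MeterReading("m-100", 2.5))
print(top.global_total())   # expected: 4.5
```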

Relevance:

20.00%

Publisher:

Abstract:

As a leading framework for processing and analyzing big data, MapReduce is leveraged by many enterprises to parallelize their data processing on distributed computing systems. Unfortunately, the all-to-all data forwarding from map tasks to reduce tasks in the traditional MapReduce framework generates a large amount of network traffic. The fact that, in many applications, the intermediate data generated by map tasks can be combined with a significant reduction in traffic motivates us to propose a data aggregation scheme for MapReduce jobs in the cloud. Specifically, we design an aggregation architecture under the existing MapReduce framework, in which aggregators can reside anywhere in the cloud, with the objective of minimizing data traffic during the shuffle phase. Experimental results show that our proposal outperforms existing work by reducing network traffic significantly.
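
The traffic-saving idea, combining intermediate key/value pairs before they cross the network, can be shown with a tiny word-count simulation. This is a generic combiner-style sketch in plain Python, not the paper's aggregator placement scheme; the counters simply compare the number of shuffled pairs with and without local aggregation.

```python
from collections import Counter, defaultdict

documents = [
    "big data needs big pipes",
    "map tasks emit intermediate data",
    "reduce tasks aggregate intermediate data",
]

def map_phase(doc):
    """Map task: emit (word, 1) for every word in the document."""
    return [(word, 1) for word in doc.split()]

def aggregate(pairs):
    """Aggregator/combiner: merge pairs locally before they are shuffled."""
    counts = Counter()
    for word, n in pairs:
        counts[word] += n
    return list(counts.items())

def shuffle_and_reduce(all_pairs):
    """Group by key across all map outputs and sum the values."""
    groups = defaultdict(int)
    for word, n in all_pairs:
        groups[word] += n
    return dict(groups)

raw = [pair for doc in documents for pair in map_phase(doc)]
combined = [pair for doc in documents for pair in aggregate(map_phase(doc))]

print("shuffled pairs without aggregation:", len(raw))        # every emitted pair crosses the network
print("shuffled pairs with aggregation:   ", len(combined))   # duplicates merged per map task
print(shuffle_and_reduce(raw) == shuffle_and_reduce(combined))  # same final result
```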

Relevance:

20.00%

Publisher:

Abstract:

The Australian Child Support Scheme was established as a means of ensuring adequate financial support for children of separated parents. Within the financial transfer of child support, however, exist notions of 'trust' and 'fairness' that depend on parents navigating their changed relationship post-separation. Previous research has explored the assessment and outcomes of child support for both payee and payer parents; however, little attention has been given to how women evaluate the assessment and outcomes of child support. This research therefore aimed to explore payee mothers' evaluation of their child support experiences, based on the value of their child support assessment and the extent to which these payments were received. Following the methods of constructivist grounded theory, in-depth interviews were conducted with 20 low-income single mothers. Analysis revealed that payee mothers evaluated child support based on the moral assumptions and rationalities they perceived to underlie payer fathers' child support compliance. While payee mothers desired arrangements that reflected joint parental financial responsibility, in reality many experienced problematic child support payments, which may ultimately undermine payee parents' confidence in the Child Support Scheme.

Relevance:

20.00%

Publisher:

Abstract:

With the explosion of big data, processing large numbers of continuous data streams, i.e., big data stream processing (BDSP), has become a crucial requirement for many scientific and industrial applications in recent years. By offering a pool of computation, communication and storage resources, public clouds, like Amazon's EC2, are undoubtedly the most efficient platforms to meet the ever-growing needs of BDSP. Public cloud service providers usually operate a number of geo-distributed datacenters across the globe, and different datacenter pairs incur different inter-datacenter network costs charged by Internet Service Providers (ISPs). Meanwhile, inter-datacenter traffic in BDSP constitutes a large portion of a cloud provider's traffic demand over the Internet and incurs substantial communication cost, which may even become the dominant operational expenditure factor. As datacenter resources are provided in a virtualized way, the virtual machines (VMs) for stream processing tasks can be freely deployed onto any datacenter, provided that the Service Level Agreement (SLA, e.g., quality-of-information) is satisfied. This raises the opportunity, but also the challenge, of exploiting inter-datacenter network cost diversity to optimize both VM placement and load balancing towards network cost minimization with a guaranteed SLA. In this paper, we first propose a general modeling framework that describes all representative inter-task relationship semantics in BDSP. Based on this framework, we formulate the communication cost minimization problem for BDSP as a mixed-integer linear programming (MILP) problem and prove it to be NP-hard. We then propose a computationally efficient solution based on the MILP formulation. The high efficiency of our proposal is validated by extensive simulation-based studies.
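
A stripped-down version of such a placement problem can be written as a MILP directly. The sketch below is a hypothetical toy (three tasks, two datacenters, invented traffic volumes and costs), not the paper's formulation: binary variables place each task on exactly one datacenter, the product of two placement variables is linearized with an auxiliary binary variable, and the objective sums inter-datacenter traffic cost. It assumes the open-source PuLP package with its default CBC solver.

```python
import pulp

tasks = ["ingest", "filter", "rank"]
dcs = ["dc1", "dc2"]
traffic = {("ingest", "filter"): 10, ("filter", "rank"): 4}   # data volume between task pairs
net_cost = {("dc1", "dc1"): 0, ("dc2", "dc2"): 0,             # per-unit inter-datacenter cost
            ("dc1", "dc2"): 3, ("dc2", "dc1"): 3}
capacity = {"dc1": 2, "dc2": 2}                               # VM slots per datacenter

prob = pulp.LpProblem("bdsp_placement", pulp.LpMinimize)

# x[t][d] = 1 if task t is placed on datacenter d
x = pulp.LpVariable.dicts("x", (tasks, dcs), cat="Binary")
# y[t1, t2, d1, d2] = 1 if t1 is on d1 AND t2 is on d2 (linearized product of x variables)
y = {(t1, t2, d1, d2): pulp.LpVariable(f"y_{t1}_{t2}_{d1}_{d2}", cat="Binary")
     for (t1, t2) in traffic for d1 in dcs for d2 in dcs}

# Objective: total communication cost over all communicating task pairs
prob += pulp.lpSum(traffic[t1, t2] * net_cost[d1, d2] * y[t1, t2, d1, d2]
                   for (t1, t2) in traffic for d1 in dcs for d2 in dcs)

for t in tasks:                                   # each task is placed exactly once
    prob += pulp.lpSum(x[t][d] for d in dcs) == 1
for d in dcs:                                     # datacenter capacity (SLA stand-in)
    prob += pulp.lpSum(x[t][d] for t in tasks) <= capacity[d]
for (t1, t2) in traffic:                          # force y = 1 when both placements hold
    for d1 in dcs:
        for d2 in dcs:
            prob += y[t1, t2, d1, d2] >= x[t1][d1] + x[t2][d2] - 1

prob.solve()
for t in tasks:
    for d in dcs:
        if x[t][d].value() > 0.5:
            print(t, "->", d)
print("network cost:", pulp.value(prob.objective))
```

With nonnegative costs, minimization drives each auxiliary variable to zero unless both placement variables are one, so the lower-bound constraint is enough to model the product.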

Relevance:

20.00%

Publisher:

Abstract:

Data is becoming the world's new natural resource, and the use of big data is growing quickly. The trend in computing technology is that everything is merged into the Internet and 'big data' are integrated to comprise complete information for collective intelligence. With the increasing size of big data, refining big data to reduce data size while keeping critical data (or useful information) is a new direction. In this paper, we provide a novel data consumption model, which separates the consumption of data from the raw data and thus enables cloud computing for big data applications. We define a new Data-as-a-Product (DaaP) concept: a data product is a small-sized summary of the original data that can directly answer users' queries. We thus separate the mining of big data into two classes of processing modules: refine modules, which turn raw big data into small-sized data products, and application-oriented mining modules, which discover the desired knowledge for applications from well-defined data products. Our practice of mining big stream data, including medical sensor stream data, streams of text data and trajectory data, demonstrates the efficiency and precision of our DaaP model for answering users' queries.
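
The split between refine modules and application-oriented mining modules can be sketched with a streaming example. Below is a minimal, hypothetical Python illustration, not the authors' system: the refine module compresses a numeric sensor stream into a small "data product" of running summary statistics, and a mining module answers queries from that product alone, without touching the raw stream. Names such as `refine_stream` and `DataProduct` are invented.

```python
from dataclasses import dataclass
import math, random

@dataclass
class DataProduct:
    """Small-sized summary of the original stream: enough to answer simple queries."""
    count: int = 0
    total: float = 0.0
    total_sq: float = 0.0
    minimum: float = math.inf
    maximum: float = -math.inf

def refine_stream(stream):
    """Refine module: turn raw big stream data into a small data product."""
    product = DataProduct()
    for value in stream:
        product.count += 1
        product.total += value
        product.total_sq += value * value
        product.minimum = min(product.minimum, value)
        product.maximum = max(product.maximum, value)
    return product

def mine_query(product, query):
    """Application-oriented mining module: answer queries from the data product only."""
    if query == "mean":
        return product.total / product.count
    if query == "range":
        return (product.minimum, product.maximum)
    if query == "stddev":
        mean = product.total / product.count
        return math.sqrt(max(product.total_sq / product.count - mean * mean, 0.0))
    raise ValueError(f"unsupported query: {query}")

# Usage: a simulated sensor stream is refined once; queries never re-read the raw data.
raw_stream = (random.gauss(72, 5) for _ in range(100_000))   # e.g. heart-rate readings
product = refine_stream(raw_stream)
print(mine_query(product, "mean"), mine_query(product, "range"))
```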

Relevance:

20.00%

Publisher:

Abstract:

Anti-discrimination law is enforced by a person who has experienced discrimination lodging a complaint with a statutory equal opportunity agency. The agency is responsible for receiving and resolving discrimination complaints and educating the community; it does not play a role in enforcing the law. The agency relies on 'carrots' to encourage voluntary compliance, but it does not wield any 'sticks'. This is not the case in other areas of law, such as industrial relations, where the Fair Work Ombudsman is charged with enforcing the law, including the prohibition of discrimination in the workplace, and possesses the necessary powers to do so. British academics Hepple, Coussey and Choudhury developed an enforcement pyramid for equal opportunity. This article shows that the model used by the Fair Work Ombudsman reflects what Hepple, Coussey and Choudhury propose, while anti-discrimination law enforcement would be represented as a flat, rectangular structure. The article considers the Fair Work Ombudsman's discrimination enforcement work to date and identifies some lessons that anti-discrimination law enforcement can learn from its experience.

Relevance:

20.00%

Publisher: