800 results for Big Five
Abstract:
In public transport, seamless, coordinated transfers strengthen the quality of service and attract ridership. The problem of transfer coordination is complicated by (1) travel time variability and (2) the unavailability of passengers' transfer plans. However, the proliferation of Big Data technologies provides a tremendous opportunity to address these problems. This dissertation enhances passenger transfer quality through offline and online transfer coordination. While offline transfer coordination exploits knowledge of travel time variability to coordinate transfers, online transfer coordination provides simultaneous vehicle arrivals at stops to facilitate transfers by employing knowledge of passenger behaviour.
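As a rough illustration of the online side of this idea, the sketch below implements one simple holding rule of the kind such coordination relies on; the function name, the fixed headway, and the person-seconds cost comparison are illustrative assumptions, not the dissertation's actual model.

```python
# A minimal holding rule for online coordination: hold a connecting vehicle only
# when the feeder is close and the waiting time saved for transferring passengers
# outweighs the delay imposed on passengers already on board. Parameters and the
# person-seconds comparison are illustrative assumptions.

def should_hold(feeder_eta_s: float, transferring: int, onboard: int,
                headway_s: float = 600.0, max_hold_s: float = 120.0) -> float:
    """Return how many seconds to hold the connecting vehicle (0 = depart now)."""
    if feeder_eta_s > max_hold_s:
        return 0.0                                     # feeder too far away: do not hold
    saved = transferring * (headway_s - feeder_eta_s)  # wait avoided vs. catching the next service
    imposed = onboard * feeder_eta_s                   # extra in-vehicle delay for onboard passengers
    return feeder_eta_s if saved > imposed else 0.0

# Feeder due in 90 s, 12 transferring vs. 40 onboard passengers -> holding pays off here.
print(should_hold(90.0, transferring=12, onboard=40))
```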
Abstract:
In the 21st century much of the world will experience untold wealth and prosperity that could not even be conceived only some three centuries before. However, as with most, if not all, human civilisations, increases in prosperity have come with accumulating environmental impacts that threaten to result in environmentally induced economic decline. A key part of the world's response to this challenge is to rapidly decarbonise economies around the world, with options to achieve 60-80 per cent improvements (i.e. in the order of Factor 5) in energy and water productivity now available and proven in every sector. Drawing upon the 2009 publication “Factor 5”, in this paper we discuss how to realise such large-scale improvements, which involve complexity beyond technical and process innovation. We begin by considering the concept of greenhouse gas stabilisation trajectories, which include reducing current greenhouse gas emissions to achieve a ‘peaking’ of global emissions and a subsequent ‘tailing’ of emissions to the desired endpoint in ‘decarbonising’ the economy. The temporal priorities given to peaking and tailing have significant implications for the mix of decarbonising solutions and for the need for government and market assistance in ensuring they are implemented, requiring careful consideration upfront. Within this context we refer to a number of examples of Factor 5 style opportunities for energy productivity and decarbonisation, and then discuss the critical economic contributions needed to take such success from examples to central mechanisms in decarbonising the global economy.
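For readers translating between the two ways of expressing the target, the percentage range and the "Factor 5" label are related by simple arithmetic (a back-of-envelope reading, not a formula from the paper):

\[
\text{Factor } F \;\Longleftrightarrow\; \text{resource use per unit of service} = \tfrac{1}{F},
\qquad F = 2.5 \Rightarrow 1 - \tfrac{1}{2.5} = 60\%\ \text{reduction},
\qquad F = 5 \Rightarrow 1 - \tfrac{1}{5} = 80\%\ \text{reduction}.
\]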
Abstract:
Tobacco, says the World Health Organisation (WHO), is “the only legal consumer product that kills when used exactly as intended by the manufacturer.”
Abstract:
It takes a lot of bravery for governments to stand up to big business. But the Gillard government has shown a lot of guts during its tenure. It stood up to Big Tobacco in the battle over plain packaging of tobacco products and has defended individuals and families affected by asbestos. It took on Big Oil in its Clean Energy Future reforms and stood up to the resource barons with the mining tax. The government is now considering Big Pharma (the pharmaceutical industry and its patents) and has launched several inquiries into patent law and pharmaceutical drugs...
Abstract:
Australia has shown outstanding leadership on tobacco control - but it could do more. The next step is surely for the Future Fund to quit its addiction to tobacco investments.
Abstract:
Big data analysis in the healthcare sector is still in its early stages compared with other business sectors, for numerous reasons: accommodating the volume, velocity and variety of healthcare data, and identifying platforms that can examine data from multiple sources, such as clinical records, genomic data, financial systems, and administrative systems. The Electronic Health Record (EHR) is a key information resource for big data analysis and is composed of varied co-created values. Successful integration and crossing of different subfields of healthcare data, such as biomedical informatics and health informatics, could lead to huge improvements for the end users of the healthcare system, i.e. the patients.
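As a minimal sketch of what examining data from multiple sources involves in practice, the snippet below joins toy extracts from three hypothetical systems on a shared patient identifier; the table and column names are assumptions made for illustration, not part of the cited work.

```python
import pandas as pd

# Hypothetical extracts from separate systems, all keyed on a shared patient_id.
clinical = pd.DataFrame({"patient_id": [1, 2], "diagnosis": ["I10", "E11"]})
genomic  = pd.DataFrame({"patient_id": [1, 2], "risk_variant": [True, False]})
billing  = pd.DataFrame({"patient_id": [1, 3], "outstanding": [120.0, 0.0]})

# A minimal EHR-style integrated view: outer joins keep patients that appear in
# only some systems, which is where the volume/variety problems surface.
ehr = clinical.merge(genomic, on="patient_id", how="outer") \
              .merge(billing, on="patient_id", how="outer")
print(ehr)
```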
Abstract:
Huge amounts of data are generated from a variety of information sources in healthcare, and these sources originate from a range of clinical information systems and corporate data warehouses. The data derived from these sources are used for analysis and trending purposes, playing an influential role as a real-time decision-making tool. The unstructured, narrative data provided by these sources qualify as healthcare big data, and researchers argue that the application of big data in healthcare might improve accountability and efficiency.
Abstract:
Within Australian universities, doctoral research in screen production is growing significantly. Two recent studies have documented both the scale of this research and inconsistencies in the requirements of the degree. These institutional variations, combined with a lack of clarity around appropriate methodologies for academic research through film and television practice, create challenges for students, supervisors, examiners and the overall development of the discipline. This paper will examine five recent doctorates in screen production practice at five different Australian universities. It will look at the nature of the films made, the research questions the candidates were investigating, the new knowledge claims that were produced and the subsequent impact of the research. The various methodologies used will be given particular attention because they help define the nature of the research where film production is a primary research method.
Abstract:
The concept of big data has already outperformed traditional data management efforts in almost all industries. In other instances it has succeeded in obtaining promising results that derive value from the large-scale integration and analysis of heterogeneous data sources, for example genomic and proteomic information. Big data analytics has become increasingly important for describing the data sets and analytical techniques in software applications that are so large and complex, owing to its significant advantages including better business decisions, cost reduction and the delivery of new products and services [1]. In a similar context, the health community has experienced not only more complex and larger data content, but also information systems that contain a large number of data sources with interrelated and interconnected data attributes. These have resulted in challenging and highly dynamic environments, leading to the creation of big data with its innumerable complexities, for instance sharing information while meeting the security requirements expected by stakeholders. Compared with other sectors, big data analysis in the health sector is still in its early stages. Key challenges include accommodating the volume, velocity and variety of healthcare data in the current deluge of exponential growth. Given the complexity of big data, it is understood that while data storage and accessibility are technically manageable, applying Information Accountability measures to healthcare big data might be a practical solution in support of information security, privacy and traceability. Transparency is one important measure that can demonstrate integrity, which is a vital factor in healthcare services. Clarity about performance expectations is another Information Accountability measure, necessary to avoid data ambiguity and controversy about interpretation, and finally, liability [2]. According to current studies [3], Electronic Health Records (EHRs) are key information resources for big data analysis and are also composed of varied co-created values. Common healthcare information originates from and is used by different actors and groups, which facilitates understanding of its relationships with other data sources. Consequently, healthcare services often operate as an integrated service bundle. Although this is a critical requirement for healthcare services and analytics, it is difficult to find a comprehensive set of guidelines for adopting EHRs to fulfil big data analysis requirements. Therefore, as a remedy, this research focuses on a systematic approach containing comprehensive guidelines on the data that must be provided to apply and evaluate big data analysis until the decision-making requirements necessary to improve the quality of healthcare services are fulfilled. Hence, we believe that this approach would subsequently improve quality of life.
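One way to picture an Information Accountability measure in this setting is an append-only usage log attached to EHR-derived analytics, as sketched below; the record fields, file format, and names are hypothetical choices made for illustration, not the guidelines the paper proposes.

```python
import json, time
from dataclasses import dataclass, asdict

@dataclass
class AccessRecord:
    """One traceable entry in an accountability log for EHR-derived analytics."""
    actor: str          # who used the data (analyst, system, or study ID)
    patient_id: str     # whose record was touched
    purpose: str        # declared analysis purpose, stated up front
    fields: list        # which EHR attributes were read
    timestamp: float

def log_access(log_path: str, record: AccessRecord) -> None:
    # Append-only JSON lines keep data usage transparent and auditable after the fact.
    with open(log_path, "a") as fh:
        fh.write(json.dumps(asdict(record)) + "\n")

log_access("ehr_audit.jsonl",
           AccessRecord("diabetes-study-42", "P001", "cohort selection",
                        ["hba1c", "age"], time.time()))
```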
Abstract:
With the ever-increasing amount of eHealth data available from various eHealth systems and sources, Health Big Data Analytics promises enticing benefits such as enabling the discovery of new treatment options and improved decision making. However, concerns over the privacy of this information have hindered its aggregation. To address these concerns, we propose the use of Information Accountability protocols to provide patients with the ability to decide how and when their data can be shared and aggregated for use in big data research. In this paper, we discuss the issues surrounding Health Big Data Analytics and propose a consent-based model to address privacy concerns and to help achieve the promised benefits of Big Data in eHealth.
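A minimal sketch of the consent-based gate such a model implies is shown below, assuming a simple per-patient policy with a sharing flag and a set of permitted fields; these structures and names are illustrative, not the authors' protocol.

```python
# A toy consent gate: each patient's policy is checked before their record joins
# an aggregated research dataset, and only permitted fields are released.
consents = {
    "P001": {"share_for_research": True,  "allowed_fields": {"age", "hba1c"}},
    "P002": {"share_for_research": False, "allowed_fields": set()},
}

def aggregate_for_research(records):
    """Keep only consented records, restricted to the fields each patient permits."""
    released = []
    for rec in records:
        policy = consents.get(rec["patient_id"],
                              {"share_for_research": False, "allowed_fields": set()})
        if policy["share_for_research"]:
            released.append({k: v for k, v in rec.items() if k in policy["allowed_fields"]})
    return released

records = [{"patient_id": "P001", "age": 63, "hba1c": 7.2},
           {"patient_id": "P002", "age": 51, "hba1c": 6.1}]
print(aggregate_for_research(records))   # only P001's permitted fields are released
```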
Abstract:
The past five years have seen many scientific and biological discoveries made through the experimental design of genome-wide association studies (GWASs). These studies were aimed at detecting variants at genomic loci that are associated with complex traits in the population and, in particular, at detecting associations between common single-nucleotide polymorphisms (SNPs) and common diseases such as heart disease, diabetes, auto-immune diseases, and psychiatric disorders. We start by giving a number of quotes from scientists and journalists about perceived problems with GWASs. We will then briefly give the history of GWASs and focus on the discoveries made through this experimental design, what those discoveries tell us and do not tell us about the genetics and biology of complex traits, and what immediate utility has come out of these studies. Rather than giving an exhaustive review of all reported findings for all diseases and other complex traits, we focus on the results for auto-immune diseases and metabolic diseases. We return to the perceived failure or disappointment about GWASs in the concluding section.
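For context on the underlying method, the basic unit of a GWAS is a per-SNP association test; a minimal version, with invented allele counts purely for illustration, looks like this:

```python
# Compare allele counts in cases vs controls for one SNP with a chi-square test.
from scipy.stats import chi2_contingency

#                 risk allele   other allele
allele_counts = [[1200,          800],    # cases
                 [1000,         1000]]    # controls

chi2, p, dof, _ = chi2_contingency(allele_counts)
print(f"chi2={chi2:.1f}, p={p:.2e}")
# Genome-wide, millions of such tests are run, so associations are conventionally
# declared significant only below roughly p < 5e-8.
```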
Abstract:
Arson homicides are rare, representing only two percent of all homicides in Australia each year. In this study, data was collected from the AIC’s National Homicide Monitoring Program (NHMP) to build on previous research undertaken into arson-associated homicides (Davies & Mouzos 2007) and to provide more detailed analysis of cases and offenders. Over the period 1989 to 2010, there were 123 incidents of arson-associated homicide, involving 170 unique victims and 131 offenders. The majority of incidents (63%) occurred in the victim’s home and more than half (57%) of all victims were male. It was found that there has been a 44 percent increase in the number of incidents in the past decade. It is evident that a considerable proportion of the identified arson homicides involved a high degree of premeditation and planning. These homicides were commonly committed by an offender who was well known to the victim, with over half of the victims (56%) specifically targeted by the offender. This paper therefore provides a valuable insight into the nature of arson homicides and signposts areas for further investigation.
Abstract:
Big Datasets are endemic, but they are often notoriously difficult to analyse because of their size, heterogeneity, history and quality. The purpose of this paper is to open a discourse on the use of modern experimental design methods to analyse Big Data in order to answer particular questions of interest. By appealing to a range of examples, it is suggested that this perspective on Big Data modelling and analysis has wide generality and advantageous inferential and computational properties. In particular, the principled experimental design approach is shown to provide a flexible framework for analysis that, for certain classes of objectives and utility functions, delivers near equivalent answers compared with analyses of the full dataset under a controlled error rate. It can also provide a formalised method for iterative parameter estimation, model checking, identification of data gaps and evaluation of data quality. Finally, it has the potential to add value to other Big Data sampling algorithms, in particular divide-and-conquer strategies, by determining efficient sub-samples.
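A toy version of the design-based idea: instead of fitting to all n observations, greedily select a small, approximately D-optimal subsample for a linear model and compare its estimates with the full-data fit. This is a sketch of the general principle under simulated data, not the paper's algorithm.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 100_000
X = np.column_stack([np.ones(n), rng.normal(size=(n, 2))])      # intercept + 2 covariates
y = X @ np.array([1.0, 2.0, -0.5]) + rng.normal(scale=0.3, size=n)

def greedy_d_optimal(X, m, pool_size=5_000):
    """Greedily pick m rows that (approximately) maximise det(X_s.T @ X_s)."""
    pool = rng.choice(len(X), size=pool_size, replace=False)     # thin the data first for speed
    chosen = list(pool[:X.shape[1]])                             # seed with enough rows for full rank
    pool = pool[X.shape[1]:]
    M = X[chosen].T @ X[chosen]
    while len(chosen) < m:
        gains = np.einsum("ij,jk,ik->i", X[pool], np.linalg.inv(M), X[pool])
        j = int(np.argmax(gains))                                # candidate with the largest leverage gain
        chosen.append(int(pool[j]))
        M += np.outer(X[pool[j]], X[pool[j]])
        pool = np.delete(pool, j)
    return np.array(chosen)

idx = greedy_d_optimal(X, 200)
beta_sub = np.linalg.lstsq(X[idx], y[idx], rcond=None)[0]
beta_full = np.linalg.lstsq(X, y, rcond=None)[0]
print("200-point design subsample:", beta_sub.round(3))
print("full data (n=100,000):     ", beta_full.round(3))
```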
Abstract:
Increasingly large-scale applications are generating an unprecedented amount of data. However, the widening gap between computation and I/O capacity on High End Computing (HEC) machines creates a severe bottleneck for data analysis. Instead of moving data from its source to the output storage, in-situ analytics processes output data while simulations are running. However, in-situ data analysis incurs considerable computing resource contention with the simulations, and such contention can severely degrade simulation performance on HEC machines. Since different data processing strategies have different impacts on performance and cost, there is a consequent need for flexibility in the location of data analytics. In this paper, we explore and analyze several potential data-analytics placement strategies along the I/O path. To find the best strategy for reducing data movement in a given situation, we propose a flexible data analytics (FlexAnalytics) framework. Based on this framework, a FlexAnalytics prototype system is developed for analytics placement. The FlexAnalytics system enhances the scalability and flexibility of the current I/O stack on HEC platforms and is useful for data pre-processing, runtime data analysis and visualization, as well as for large-scale data transfer. Two use cases – scientific data compression and remote visualization – have been applied in this study to verify the performance of FlexAnalytics. Experimental results demonstrate that the FlexAnalytics framework increases data transfer bandwidth and improves application end-to-end transfer performance.
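The placement trade-off can be pictured as a small cost comparison along the I/O path, as in the sketch below; the cost model, bandwidth figures, and option names are illustrative assumptions, not FlexAnalytics' actual decision logic.

```python
# Estimate the end-to-end time of running analytics at each point on the I/O path
# and pick the cheapest option. All numbers are made up for illustration.

def path_time(data_gb, reduce_ratio, analysis_s, ship_before_bw, ship_after_bw, contention=1.0):
    """Rough end-to-end seconds for one placement option.

    reduce_ratio   : fraction of data left after analytics (e.g. compression)
    ship_before_bw : GB/s for moving raw data to the analytics location
    ship_after_bw  : GB/s for moving the reduced result onward
    contention     : slowdown when analytics shares resources with the simulation
    """
    move_in = data_gb / ship_before_bw
    move_out = data_gb * reduce_ratio / ship_after_bw
    return move_in + analysis_s * contention + move_out

data_gb = 500.0
options = {
    "in-situ (compute node)":    path_time(data_gb, 0.1, 300, ship_before_bw=1e9, ship_after_bw=1.0, contention=2.0),
    "in-transit (staging node)": path_time(data_gb, 0.1, 300, ship_before_bw=10.0, ship_after_bw=1.0),
    "offline (remote site)":     path_time(data_gb, 1.0, 300, ship_before_bw=1.0,  ship_after_bw=1e9),
}
best = min(options, key=options.get)
print(best, {k: round(v) for k, v in options.items()})
```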