348 results for Business Intelligence, ETL, Data Warehouse, Metadati, Reporting


Relevance:

30.00%

Publisher:

Abstract:

Although human papillomavirus (HPV) is a common sexually transmitted infection, knowledge of HPV remains limited, and ethnic/racial minorities experience the greatest disparities. This cross-sectional study used the most recent available data from the California Health Interview Survey to assess disparities in awareness and knowledge of HPV among ethnically/racially diverse women varying in generation status (N = 19,928). Generation status emerged as a significant predictor of HPV awareness across ethnic/racial groups, with 1st generation Asian-Americans and 1st and 2nd generation Latinas reporting the least awareness when compared to same-generation White counterparts. Generation status was also a significant predictor of HPV knowledge, but only for Asian-Americans. Regardless of ethnicity/race, 1st generation women reported the lowest HPV knowledge when compared to 2nd and 3rd generation women. These findings underscore the importance of looking at differences within and across ethnic/racial groups to identify subgroups at greatest risk for poor health outcomes. In particular, we found generation status to be an important yet often overlooked factor in the identification of health disparities.
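
As a hedged illustration of the analytical setup described (not the study's actual code, variable names, or results), the sketch below simulates toy survey-style data and fits a logistic regression of HPV awareness on ethnicity/race, generation status and their interaction.

```python
# Hypothetical sketch only: variable names (hpv_aware, ethnicity, generation)
# and the simulated data are illustrative, not actual CHIS fields or findings.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n = 400
df = pd.DataFrame({
    "ethnicity": rng.choice(["White", "Latina", "Asian-American"], size=n),
    "generation": rng.choice(["1st", "2nd", "3rd"], size=n),
})
# Toy pattern: lower awareness among 1st-generation respondents.
p = np.where(df["generation"] == "1st", 0.55, 0.80)
df["hpv_aware"] = rng.binomial(1, p)

# Logistic regression with an ethnicity-by-generation interaction, so that
# generation effects are allowed to differ across ethnic/racial groups.
model = smf.logit("hpv_aware ~ C(ethnicity) * C(generation)", data=df).fit(disp=False)
print(model.summary())
```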

Relevance:

30.00%

Publisher:

Abstract:

National pride is both an important and understudied topic with respect to economic behaviour; hence this thesis investigates whether: 1) there is a "light" side of national pride through increased compliance, and a "dark" side linked to exclusion; 2) successful priming of national pride is linked to increased tax compliance; and 3) East German post-reunification outmigration is related to loyalty. The project comprises three related empirical studies, analysing evidence from a large, aggregated, international survey dataset; a tax compliance laboratory experiment combining psychological priming with measurement of heart rate variability; and data collected after the fall of the Berlin Wall (a situation approximating a natural experiment).

Relevance:

30.00%

Publisher:

Abstract:

This study examines Interim Financial Reporting disclosure compliance and associated factors for listed firms in Asia-Pacific countries: Australia, Hong Kong, Malaysia, Singapore, the Philippines, Thailand, and Vietnam. Employing disclosure theory (in the context of information economics), whose central premise is that managers trade off the costs and benefits of disclosure, the factors influencing variation in interim reporting disclosure compliance are examined. Using researcher-constructed disclosure indices and regression modelling, the results reveal significant cross-country variation in interim reporting disclosure compliance, with higher compliance associated with IFRS adoption, audit review, quarterly reporting (rather than six-monthly) and shorter reporting lags.
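
As a minimal sketch of the kind of researcher-constructed index described here (assumed form: items disclosed divided by items applicable, then regressed on candidate factors; the data and column names below are hypothetical, not the study's):

```python
# Hedged sketch: an unweighted disclosure compliance index and a simple OLS
# regression on the factors the abstract associates with higher compliance.
# All values are invented for illustration.
import pandas as pd
import statsmodels.formula.api as smf

firms = pd.DataFrame({
    "items_disclosed":  [38, 45, 30, 50, 42, 28, 47, 33],
    "items_applicable": [50, 50, 45, 55, 50, 40, 55, 45],
    "ifrs_adopted":     [1, 1, 0, 1, 1, 0, 1, 0],
    "audit_review":     [1, 0, 0, 1, 1, 0, 1, 0],
    "quarterly":        [0, 1, 0, 1, 1, 0, 1, 0],
    "reporting_lag":    [60, 42, 75, 35, 40, 80, 38, 70],  # days to release
})
# Compliance index: share of applicable interim reporting items actually disclosed.
firms["disclosure_index"] = firms["items_disclosed"] / firms["items_applicable"]

model = smf.ols(
    "disclosure_index ~ ifrs_adopted + audit_review + quarterly + reporting_lag",
    data=firms,
).fit()
print(model.params)
```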

Relevance:

30.00%

Publisher:

Abstract:

This paper addresses the problem of predicting the outcome of an ongoing case of a business process based on event logs. In this setting, the outcome of a case may refer, for example, to the achievement of a performance objective or the fulfillment of a compliance rule upon completion of the case. Given a log consisting of traces of completed cases, given a trace of an ongoing case, and given two or more possible outcomes (e.g., a positive and a negative outcome), the paper addresses the problem of determining the most likely outcome for the case in question. Previous approaches to this problem are largely based on simple symbolic sequence classification, meaning that they extract features from traces seen as sequences of event labels, and use these features to construct a classifier for runtime prediction. In doing so, these approaches ignore the data payload associated with each event. This paper approaches the problem from a different angle by treating traces as complex symbolic sequences, that is, sequences of events each carrying a data payload. In this context, the paper outlines different feature encodings of complex symbolic sequences and compares their predictive accuracy on real-life business process event logs.
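
To make the idea concrete, here is a hedged sketch of one possible encoding of traces as feature vectors (event-label frequencies plus a simple aggregate of a numeric payload attribute) fed to a standard classifier. The paper compares several encodings of complex symbolic sequences; this is not its exact method, and the event labels, payload attribute and outcomes are invented.

```python
# Illustrative only: encode each trace by activity-label counts (control flow)
# plus one data-payload feature, then train a classifier to predict the outcome
# of an ongoing case from its prefix.
from collections import Counter
from sklearn.ensemble import RandomForestClassifier
from sklearn.feature_extraction import DictVectorizer

# Each trace is a list of (activity_label, payload) events; all values are made up.
traces = [
    [("register", {"amount": 100}), ("check", {"amount": 100}), ("approve", {"amount": 100})],
    [("register", {"amount": 5000}), ("check", {"amount": 5000}), ("reject", {"amount": 5000})],
    [("register", {"amount": 250}), ("approve", {"amount": 250})],
    [("register", {"amount": 7000}), ("check", {"amount": 7000}), ("check", {"amount": 7000}),
     ("reject", {"amount": 7000})],
]
outcomes = [1, 0, 1, 0]  # e.g. 1 = performance objective met, 0 = not met

def encode(trace):
    feats = Counter(label for label, _ in trace)              # control-flow features
    feats["max_amount"] = max(p["amount"] for _, p in trace)  # data-payload feature
    return dict(feats)

vec = DictVectorizer(sparse=False)
X = vec.fit_transform([encode(t) for t in traces])
clf = RandomForestClassifier(n_estimators=50, random_state=0).fit(X, outcomes)

# Predict the most likely outcome of an ongoing (incomplete) case from its prefix.
prefix = [("register", {"amount": 6000}), ("check", {"amount": 6000})]
print(clf.predict(vec.transform([encode(prefix)])))
```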

Relevance:

30.00%

Publisher:

Abstract:

Big data analysis in the healthcare sector is still in its early stages compared with other business sectors, for numerous reasons, including the challenges of accommodating the volume, velocity and variety of healthcare data and of identifying platforms that can examine data from multiple sources, such as clinical records, genomic data, financial systems, and administrative systems. The Electronic Health Record (EHR) is a key information resource for big data analysis and is also composed of varied co-created values. Successful integration and crossing of different subfields of healthcare data, such as biomedical informatics and health informatics, could lead to huge improvements for the end users of the health care system, i.e. the patients.

Relevance:

30.00%

Publisher:

Abstract:

The concept of big data has already outperformed traditional data management efforts in almost all industries. In other instances it has succeeded in obtaining promising results that provide value from large-scale integration and analysis of heterogeneous data sources, for example genomic and proteomic information. Big data analytics, which describes data sets and analytical techniques for software applications that are too large and complex for traditional tools, has become increasingly important because of its significant advantages, including better business decisions, cost reduction and the delivery of new products and services [1]. In a similar context, the health community has experienced not only more complex and larger data content, but also information systems that contain a large number of data sources with interrelated and interconnected data attributes. These have resulted in challenging and highly dynamic environments, leading to the creation of big data with its innumerable complexities, for instance sharing information while meeting the security requirements expected by stakeholders. Compared with other sectors, big data analysis in the health sector is still in its early stages. Key challenges include accommodating the volume, velocity and variety of healthcare data amid the current deluge of exponential growth. Given the complexity of big data, it is understood that while data storage and accessibility are technically manageable, applying Information Accountability measures to healthcare big data might be a practical solution in support of information security, privacy and traceability. Transparency is one important measure that can demonstrate integrity, which is a vital factor in healthcare services. Clarity about performance expectations is another Information Accountability measure, necessary to avoid data ambiguity, controversy about interpretation and, finally, liability [2]. According to current studies, Electronic Health Records (EHRs) are key information resources for big data analysis and are also composed of varied co-created values [3]. Common healthcare information originates from, and is used by, different actors and groups, which facilitates understanding of its relationship to other data sources. Consequently, healthcare services often serve as an integrated service bundle. Although this is a critical requirement in healthcare services and analytics, it is difficult to find a comprehensive set of guidelines for adopting EHRs to fulfil big data analysis requirements. Therefore, as a remedy, this research focuses on a systematic approach containing comprehensive guidelines on the data that must be provided to apply and evaluate big data analysis until the necessary decision-making requirements are fulfilled to improve the quality of healthcare services. Hence, we believe that this approach would subsequently improve quality of life.

Relevance:

30.00%

Publisher:

Abstract:

Digital technology offers enormous benefits (economic, quality of design and efficiency in use) if adopted to implement integrated ways of representing the physical world in a digital form. When applied across the full extent of the built and natural world, it is referred to as the Digital Built Environment (DBE) and encompasses a wide range of approaches and technology initiatives, all aimed at the same end goal: the development of a virtual world that sufficiently mirrors the real world to form the basis for the smart cities of the present and future, enable efficient infrastructure design and programmed maintenance, and create a new foundation for economic growth and social well-being through evidence-based analysis. The creation of a National Data Policy for the DBE will facilitate the creation of additional high-technology industries in Australia; provide Governments, industries and citizens with greater knowledge of the environments they occupy and plan; and offer citizen-driven innovations for the future. Australia has slipped behind other nations in the adoption and execution of Building Information Modelling (BIM), and the principal concern is that the gap is widening. Data-driven innovation added $67 billion to the Australian economy in 2013. Strong open data policy equates to $16 billion in new value. Australian Government initiatives such as the Digital Earth-inspired “National Map” offer a platform and pathway to embrace the concept of a “BIM Globe”, while also leveraging unprecedented growth in open source / open data collaboration. Australia must address the challenges by learning from international experiences, most notably the UK and NZ, and mandate the use of BIM across Government, extending the Framework for Spatial Data Foundation to include the Built Environment as a theme and engaging collaboration through a “BIM globe” metaphor. This proposed DBE strategy will modernise Australian urban planning and the construction industry. It will change the way we develop our cities by fundamentally altering the dynamics and behaviours of the supply chains and unlocking new and more efficient ways of collaborating at all stages of the project life-cycle. There are currently two major modelling approaches that contribute to the challenge of delivering the DBE. Though these collectively encompass many (often competing) approaches or proprietary software systems, all can be categorised as either: a spatial modelling approach, where the focus is generally on representing the elements that make up the world within their geographic context; or a construction modelling approach, where the focus is on models that support the life cycle management of the built environment. These two approaches have tended to evolve independently, addressing two broad industry sectors: the one concerned with understanding and managing global and regional aspects of the world that we inhabit, including disciplines concerned with climate, earth sciences, land ownership, urban and regional planning and infrastructure management; the other concerned with planning, design, construction and operation of built facilities, including architectural and engineering design, product manufacturing, construction, facility management and related disciplines (a process/technology commonly known as Building Information Modelling, BIM).
The spatial industries have a strong voice in the development of public policy in Australia, while the construction sector, which in 2014 accounted for around 8.5% of Australia’s GDP, has no single voice and, because of its diversity, is struggling to adapt to and take advantage of the opportunity presented by these digital technologies. The experience in the UK over the past few years has demonstrated that government leadership is very effective in stimulating industry adoption of digital technologies by, on the one hand, mandating the use of BIM on public procurement projects while, at the same time, providing comparatively modest funding to address the common issues that confront the industry in adopting that way of working across the supply chain. The reported result has been savings of £840m in construction costs in 2013/14, according to UK Cabinet Office figures. There is worldwide recognition of the value of bringing these two modelling technologies together. Australia has the expertise to exercise leadership in this work, but it requires a commitment by government to recognise the importance of BIM as a companion methodology to the spatial technologies so that these two disciplinary domains can cooperate in the development of data policies and information exchange standards to smooth out common workflows. buildingSMART Australasia, SIBA and their academic partners have initiated this dialogue in Australia and wish to work collaboratively, with government support and leadership, to explore the opportunities open to us as we develop an Australasian Digital Built Environment. As part of that programme, we must develop and implement a strategy to accelerate the adoption of BIM processes across the Australian construction sector while, at the same time, developing an integrated approach in concert with the spatial sector that will position Australia at the forefront of international best practice in this area. Australia and New Zealand cannot afford to be on the back foot as we face the challenges of rapid urbanisation and change in the global environment. Although we can identify some exemplary initiatives in this area, particularly in New Zealand in response to the need for more resilient urban development in the face of earthquake threats, there is still much that needs to be done. We are well situated in the Asian region to take a lead in this challenge, but we are at imminent risk of losing the initiative if we do not take action now. Strategic collaboration between Governments, Industry and Academia will create new jobs and wealth, with the potential, for example, to save around 20% on the delivery costs of new built assets, based on recent UK estimates.

Relevance:

30.00%

Publisher:

Abstract:

In recent years accounting education has seen numerous changes to the way financial accounting is taught. These changes reflect the demands of an ever-changing business world, opportunities created by new technology and instructional technologies, and an increased understanding of how students learn. The foundation of Financial Accounting is based on a number of unique principles and innovations in accounting education. The objective of Financial Accounting is to provide students with an understanding of those concepts that are fundamental to the preparation and use of accounting information. Most students will forget procedural details within a short period of time. On the other hand, concepts, if well taught, should be remembered for a lifetime. Concepts are especially important in a world where the details are constantly changing. Students learn best when they are actively engaged. The overriding pedagogical objective of Financial Accounting is to provide students with continual opportunities for active learning. One of the best tools for active learning is strategically placed questions. Discussions are framed by questions, often beginning with rhetorical questions and ending with review questions, and our analytical devices, called decision-making toolkits, use key questions to demonstrate the purpose of each.

Relevance:

30.00%

Publisher:

Abstract:

The rapid increase in migration into host countries and the growth of immigrant-owned business enterprises have revitalized research on ethnic business. Does micro (individual)-level social capital, or meso (group)-level location within the ethnic enclave, lead to immigrant business growth? Or are both needed? We analyze quantitative data collected from 110 Chinese restaurants in Australia, a major host country. At the micro level we find that coethnic (same ethnic group) networks are critical to the growth of an immigrant entrepreneur's business, particularly in the early years. But non-coethnic (different ethnic group) social capital has a positive impact on business growth only for immigrant businesses outside the ethnic enclave. Our findings are relevant not only to host-country policymakers but also to future immigrant business owners and ethnic community leaders trying to better understand how to promote healthy communities and sustainable economic growth.

Relevance:

30.00%

Publisher:

Abstract:

OBJECTIVE: The present study evaluates the prehospital care of paediatric burn patients in Queensland (QLD). As first aid (FA) treatment has been shown to affect burn progression and outcome, the FA treatment and the risk of associated hypothermia in paediatric patients were specifically examined in the context of paramedic management of burn patients. METHODS: Data were retrospectively collected from electronic ambulance response forms (eARFs) for paediatric burn patients (0-5 years) who were attended by Queensland Ambulance Service (QAS) from 2008 to 2010. Data were collected from 117 eARFs of incidents occurring within the Brisbane, Townsville and Cairns regions. RESULTS: Initial FA measures were recorded in 77.8% of cases, with cool running water FA administered in 56.4% of cases. The duration of FA was recorded in 29.9% of reports. The duration of FA was significantly shorter for patients in Northern QLD (median = 10 min, n = 10) compared with Brisbane (median = 15 min, n = 18), P = 0.005. Patient temperatures were recorded significantly more often in Brisbane than in other regions (P = 0.041); however, in total, only 24.8% of all patients had documented temperature readings. Of these, six (5%) were recorded as having temperatures ≤ 36.0°C. Burnaid(TM) was the most commonly used dressing and was applied to 55.6% of all patients; however, it was applied with a variety of different outer dressings. Brisbane paramedics applied Burnaid significantly less often (44.3%) compared with paramedics from Northern QLD (72.7%) and Far Northern QLD (60.9%), P = 0.025. CONCLUSIONS: Despite FA and patient temperatures being important prognostic factors for burn patients, paramedic documentation of these was often incomplete, and there was no consistent use of burns dressings.

Relevance:

30.00%

Publisher:

Abstract:

This paper explores consumer behavioural patterns on a magazine website. Using a unique dataset of real-life clickstream data from 295 magazine website visitors, individual sessions are grouped according to the different sections visited on the website. Interesting behavioural patterns are noted: most importantly, 86% of all sessions visit only the blogs. This means that these visitors are not exposed to any editorial content at all and also avoid commercial content. Sessions visiting editorial content, commercial content or social media links are very few in number (each 1 per cent or less of the sessions), thus giving only very limited support to the magazine business model. We note that consumer behaviour on the magazine website appears to be very goal-oriented and instrumental, rather than exploratory and ritualized. This paper contributes to current knowledge of media management by shedding light on consumer behaviour on media websites and opening up the challenges facing current media business models. From a more practical perspective, our data question the general assumption of online platforms as supporters of the print business.
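
A hedged sketch of the kind of session grouping described, using made-up section labels and a toy clickstream rather than the paper's data or categories:

```python
# Hypothetical sketch: label each session by the set of site sections it
# touches, then compute the share of sessions that visit only the blogs.
import pandas as pd

clicks = pd.DataFrame({
    "session_id": [1, 1, 2, 2, 2, 3, 4, 4, 5],
    "section": ["blog", "blog", "blog", "editorial", "commercial",
                "blog", "blog", "social", "blog"],
})

# One row per session: the set of sections visited during that session.
sections_per_session = clicks.groupby("session_id")["section"].agg(frozenset)

blog_only_share = sections_per_session.apply(lambda visited: visited == {"blog"}).mean()
print(f"Share of blog-only sessions: {blog_only_share:.0%}")
print(sections_per_session.value_counts())
```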

Relevance:

30.00%

Publisher:

Abstract:

Background: Schizophrenia is associated with lower pre-morbid intelligence (IQ) in addition to (pre-morbid) cognitive decline. Both schizophrenia and IQ are highly heritable traits. Therefore, we hypothesized that genetic variants associated with schizophrenia, including copy number variants (CNVs) and a polygenic schizophrenia (risk) score (PSS), may influence intelligence. Method: IQ was estimated with the Wechsler Adult Intelligence Scale (WAIS). CNVs were determined from single nucleotide polymorphism (SNP) data using the QuantiSNP and PennCNV algorithms. For the PSS, odds ratios for genome-wide SNP data were calculated in a sample collected by the Psychiatric Genome-Wide Association Study (GWAS) Consortium (8690 schizophrenia patients and 11,831 controls). These were used to calculate individual PSSs in our independent sample of 350 schizophrenia patients and 322 healthy controls. Results: Although significantly more genes were disrupted by deletions in schizophrenia patients compared to controls (p = 0.009), there was no effect of CNV measures on IQ. The PSS was associated with disease status (R² = 0.055, p = 2.1 × 10⁻⁷) and with IQ in the entire sample (R² = 0.018, p = 0.0008), but the effect on IQ disappeared after correction for disease status. Conclusions: Our data suggest that rare and common schizophrenia-associated variants do not explain the variation in IQ in healthy subjects or in schizophrenia patients. Thus, reductions in IQ in schizophrenia patients may be secondary to other processes related to schizophrenia risk. © Cambridge University Press 2013.
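
For readers unfamiliar with how a polygenic score of this kind is typically formed, the sketch below computes, for each individual, the sum of log(odds ratio) × risk-allele count over the scored SNPs. The weights and genotypes are invented, and this is a generic construction rather than the study's exact pipeline.

```python
# Illustrative polygenic score: weight each risk allele by the log odds ratio
# estimated in the discovery GWAS, then sum over SNPs for each individual.
# All numbers here are made up.
import numpy as np

odds_ratios = np.array([1.10, 0.92, 1.25])   # discovery-sample ORs for 3 SNPs
weights = np.log(odds_ratios)                # per-allele log-odds weights

# Risk-allele counts (0, 1 or 2) for two hypothetical target-sample individuals.
genotypes = np.array([
    [2, 1, 0],
    [0, 2, 1],
])

pss = genotypes @ weights                    # one score per individual
print(pss)
```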

Relevance:

30.00%

Publisher:

Abstract:

This article describes a maximum likelihood method for estimating the parameters of the standard square-root stochastic volatility model and a variant of the model that includes jumps in equity prices. The model is fitted to data on the S&P 500 Index and the prices of vanilla options written on the index, for the period 1990 to 2011. The method is able to estimate both the parameters of the physical measure (associated with the index) and the parameters of the risk-neutral measure (associated with the options), including the volatility and jump risk premia. The estimation is implemented using a particle filter whose efficacy is demonstrated under simulation. The computational load of this estimation method, which previously has been prohibitive, is managed by the effective use of parallel computing using graphics processing units (GPUs). The empirical results indicate that the parameters of the models are reliably estimated and consistent with values reported in previous work. In particular, both the volatility risk premium and the jump risk premium are found to be significant.
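
For reference, one common formulation of the square-root stochastic volatility model with lognormal jumps in the equity price (generic notation; the paper's parameterisation may differ) is:

```latex
\begin{aligned}
\frac{dS_t}{S_t} &= \mu\,dt + \sqrt{V_t}\,dW_t^{S} + \bigl(e^{J}-1\bigr)\,dN_t,\\
dV_t &= \kappa(\theta - V_t)\,dt + \sigma_v\sqrt{V_t}\,dW_t^{V},\\
dW_t^{S}\,dW_t^{V} &= \rho\,dt,\qquad N_t \sim \mathrm{Poisson}(\lambda t),\qquad
J \sim \mathcal{N}\!\left(\mu_J,\,\sigma_J^{2}\right),
\end{aligned}
```

where κ, θ and σ_v are the mean-reversion speed, long-run variance and volatility of volatility, ρ is the correlation between price and variance shocks, and λ, μ_J, σ_J govern the jump intensity and jump-size distribution. The volatility and jump risk premia discussed in the abstract correspond to differences between these parameters under the physical and risk-neutral measures.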

Relevance:

30.00%

Publisher:

Abstract:

Big Data and predictive analytics have received significant attention from the media and academic literature throughout the past few years, and it is likely that these emerging technologies will materially impact the mining sector. This short communication argues, however, that these technological forces will probably unfold differently in the mining industry than they have in many other sectors because of significant differences in the marginal cost of data capture and storage. To this end, we offer a brief overview of what Big Data and predictive analytics are, and explain how they are bringing about changes in a broad range of sectors. We discuss the “N=all” approach to data collection being promoted by many consultants and technology vendors in the marketplace but, by considering the economic and technical realities of data acquisition and storage, we then explain why an “n ≪ all” data collection strategy probably makes more sense for the mining sector. Finally, towards shaping the industry’s policies with regard to technology-related investments in this area, we conclude by putting forward a conceptual model for leveraging Big Data tools and analytical techniques that is a more appropriate fit for the mining sector.

Relevance:

30.00%

Publisher:

Abstract:

Over recent years, the focus in road safety has shifted towards a greater understanding of serious road crash injuries in addition to fatalities. Police-reported crash data are often the primary source of crash information; however, the definition of serious injury within these data is not consistent across jurisdictions and may not be accurately operationalised. This study examined the linkage of police-reported road crash data with hospital data to explore the potential for linked data to enhance the quantification of serious injury. Data from the Queensland Road Crash Database (QRCD), the Queensland Hospital Admitted Patients Data Collection (QHAPDC), the Emergency Department Information System (EDIS), and the Queensland Injury Surveillance Unit (QISU) for the year 2009 were linked. Nine different estimates of serious road crash injury were produced. Results showed that there was a large amount of variation in the estimates of the number and profile of serious road crash injuries depending on the definition or measure used. The results also showed that as the definition of serious injury becomes more precise, vulnerable road users become more prominent. These results have major implications for how serious injuries are identified for reporting purposes. Depending on the definitions used, the calculation of cost and understanding of the impact of serious injuries would vary greatly. This study has shown how data linkage can be used to investigate issues of data quality. It has also demonstrated the potential improvements to the understanding of the road safety problem, particularly serious injury, that can be gained by conducting data linkage.
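
A hedged sketch of how linked police and hospital records can yield different serious-injury counts depending on the definition applied (the record IDs, fields and thresholds below are hypothetical, not the QRCD/QHAPDC schema):

```python
# Illustrative only: three alternative serious-injury definitions applied to
# the same toy linked dataset give three different estimates.
import pandas as pd

police = pd.DataFrame({
    "person_id": [1, 2, 3, 4, 5],
    "police_severity": ["hospitalised", "medical treatment", "hospitalised",
                        "minor", "medical treatment"],
})
hospital = pd.DataFrame({
    "person_id": [1, 3, 5],
    "admitted": [True, True, True],
    "length_of_stay_days": [12, 1, 4],
})

# Link the two sources on a common person identifier.
linked = police.merge(hospital, on="person_id", how="left")

estimates = {
    "police-coded 'hospitalised'": int((linked["police_severity"] == "hospitalised").sum()),
    "any matched hospital admission": int(linked["admitted"].fillna(False).sum()),
    "matched admission, stay > 3 days": int((linked["length_of_stay_days"] > 3).sum()),
}
print(estimates)
```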