774 results for: data warehouse tuning aggregato business intelligence performance


Relevance: 40.00%

Abstract:

One of the main challenges in data analytics is that discovering structures and patterns in complex datasets is a compute-intensive task. Recent advances in high-performance computing provide part of the solution: multicore systems are now more affordable and more accessible. In this paper, we investigate how such systems can be used to develop more advanced methods for data analytics. We focus on two specific areas: model-driven analysis and data mining using optimisation techniques.
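As a hedged illustration of the multicore approach the abstract describes, the sketch below parallelises a simple model-fitting search across cores with Python's multiprocessing; the objective function and parameter grid are hypothetical stand-ins, not taken from the paper.

```python
# Minimal sketch: parallel model-driven analysis on a multicore system.
# The objective function and parameter grid are illustrative assumptions.
from multiprocessing import Pool

def fit_score(params):
    """Hypothetical objective: score one candidate model configuration."""
    a, b = params
    # Stand-in computation; a real analysis would fit a model to data here.
    return (a, b), -(a - 3.0) ** 2 - (b + 1.0) ** 2

if __name__ == "__main__":
    grid = [(a / 10.0, b / 10.0) for a in range(-50, 51) for b in range(-50, 51)]
    with Pool() as pool:                      # one worker per available core
        results = pool.map(fit_score, grid)   # evaluate candidates in parallel
    best_params, best_score = max(results, key=lambda r: r[1])
    print(best_params, best_score)
```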

Relevance: 40.00%

Abstract:

This study argues that small and medium-sized enterprises (SMEs) must possess both resources and capabilities at a superior level, and those resources and capabilities must be complementary with one another to achieve superior financial performance. The resources and capabilities of interest are product innovation and marketing. Using data from manufacturing SMEs, the results suggest that product innovation resource–capability complementarity, marketing resource–capability complementarity, and their interaction are positively related to financial performance through product innovation and customer performance. The findings suggest that some SMEs may outperform others not only because they possess a specific individual resource–capability complementarity but also because they create synergy and asset interconnectedness.

Relevance: 40.00%

Abstract:

New product innovation has been identified as the key to firms' marketplace success, profit and survival. Yet the failure rate for new products is high. Because of the high costs associated with new product development, there is considerable theoretical and managerial interest in how to minimize the high failure rates of new products and in what separates new product winners from losers. This study focuses on individual-level ambidexterity – namely, the R&D department head's capacity to engage in creativity and attention to detail simultaneously, a skill involving different centers of attention and relying on somewhat incompatible behaviors and processes. The ability to engage in these behaviors simultaneously is seen as being ambidextrous. Drawing on data from 150 advanced manufacturing firms in India (gathered from one CEO and one head of R&D for each firm), the results show that when a head of R&D engages heavily only in creativity, too many new, risky ideas may emerge, and when he/she engages heavily only in attention to detail, he/she may suffer from a lack of novel ideas. Both approaches limit the individual's contribution to enhancing the product innovation–financial performance relationship. The results also show that a head of R&D needs to engage in high levels of both creativity and attention to detail in the pursuit of enhancing product innovation to achieve superior financial performance.

Relevance: 40.00%

Abstract:

This article examines variations in performance between fast-growth – the so-called gazelle – firms. Specifically, we investigate how the level of growth affects future profitability and how this relationship is moderated by firm strategy. Hypotheses are developed regarding the moderated growth–profitability relationship and are tested using longitudinal data from a sample of 964 Danish gazelle firms. We find a positive relationship between growth and profitability among gazelle firms. This relationship is moderated, however, by market strategy; it is stronger for firms pursuing a broad market strategy rather than a niche strategy. This study contributes to the current literature by providing a more nuanced view of the growth–profitability relationship and investigating the potential for the future performance of gazelle firms.
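A hedged sketch of the kind of moderated regression such a test implies: profitability regressed on growth, a strategy indicator, and their interaction. Variable names, effect sizes and the simulated data are invented; the paper's actual model specification is not reproduced here.

```python
# Hedged sketch of a moderated growth–profitability regression.
# Data are simulated; only the sample size echoes the abstract.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n = 964                                        # sample size from the abstract
growth = rng.normal(size=n)
broad = rng.integers(0, 2, size=n)             # 1 = broad market strategy
# Assumed effects: positive growth effect, stronger under a broad strategy.
profit = 0.2 * growth + 0.3 * growth * broad + rng.normal(scale=0.5, size=n)

df = pd.DataFrame({"profit": profit, "growth": growth, "broad": broad})
model = smf.ols("profit ~ growth * broad", data=df).fit()
print(model.summary())  # the growth:broad term captures the moderation
```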

Relevance: 40.00%

Abstract:

Many organizations realize that increasing amounts of data (“Big Data”) need to be dealt with intelligently in order to compete with other organizations in terms of efficiency, speed and services. The goal is not to collect as much data as possible, but to turn event data into valuable insights that can be used to improve business processes. However, data-oriented analysis approaches fail to relate event data to process models. At the same time, large organizations are generating piles of process models that are disconnected from the real processes and information systems. In this chapter we propose to manage large collections of process models and event data in an integrated manner. Observed and modeled behavior need to be continuously compared and aligned. This results in a “liquid” business process model collection, i.e. a collection of process models that is in sync with the actual organizational behavior. The collection should self-adapt to evolving organizational behavior and incorporate relevant execution data (e.g. process performance and resource utilization) extracted from the logs, thereby allowing insightful reports to be produced from factual organizational data.
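A minimal, hypothetical sketch of the compare-and-align idea: observed traces from an event log are checked against the traces a toy process model allows, flagging deviations that would signal the model has drifted out of sync. The activity names are invented, and a real implementation would use a process mining toolkit with proper conformance checking rather than set membership.

```python
# Toy sketch: keeping a process model "in sync" with observed behavior.
# The model here is just the set of activity sequences it allows; real
# conformance checking (e.g., alignments on Petri nets) is far richer.
modeled_traces = {
    ("register", "check", "approve"),
    ("register", "check", "reject"),
}

event_log = [
    ("register", "check", "approve"),
    ("register", "approve"),            # deviation: skipped "check"
    ("register", "check", "reject"),
]

deviating = [t for t in event_log if tuple(t) not in modeled_traces]
fitness = 1 - len(deviating) / len(event_log)
print(f"fitness: {fitness:.2f}, deviating traces: {deviating}")
# A low fitness score would trigger re-discovery/adaptation of the model.
```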

Relevance: 40.00%

Abstract:

In today's high-pressure work environment, project managers are often forced to “do more with less.” We argue that this imperative can lead project managers to engage in either high-performance or abusive supervision behaviors. To understand this process, we develop a model and associated propositions linking a project manager's cognitive appraisal of project-related demands to high-performance work practices versus abusive supervision behaviors—both of which impact three project outcomes: stakeholder relationships, people-related project success factors, and employee well-being. We propose that the choice between high-performance work practices and abusive supervision behaviors is moderated by a project manager's personal resources (psychological capital, emotional intelligence, and dark triad personality).

Relevance: 40.00%

Abstract:

Large complex projects often fail spectacularly in terms of cost overruns and delays; witness the London Olympics and the Airbus A380. In this project, we studied the emotional intelligence (EI) of leadership teams involved in such projects. We collected our data from 370 employees in 40 project teams working on large Australian defense contracts. We asked leadership team members to complete a scale measuring their EI, and project team members to rate the success of the projects. We found it was not the mean score, but the highest EI score in the leadership team that predicted members’ project success ratings.
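To make the aggregation point concrete, here is a hedged pandas sketch contrasting the team mean with the team maximum EI score; column names and values are invented for illustration, not the study's data.

```python
# Illustrative sketch: team-level mean vs. maximum EI score.
import pandas as pd

leaders = pd.DataFrame({
    "team": ["A", "A", "A", "B", "B", "B"],
    "ei":   [72,  85,  60,  78,  74,  76],
})

agg = leaders.groupby("team")["ei"].agg(["mean", "max"])
print(agg)
# The study found "max" (the single highest-EI leader), not "mean",
# predicted members' project success ratings.
```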

Relevance: 40.00%

Abstract:

Purpose: This paper explores the contribution of global business services to improved productivity and economic growth of the world economy, which has gone largely unnoticed in service research.

Design/methodology/approach: The authors draw on macroeconomic data and industry reports, and link them to the non-ownership concept in service research and theories of the firm.

Findings: Business services explain a large share of the growth of the global service economy. The fast growth of business services coincides with shifts from domestic production towards global outsourcing of services. A new wave of global business services is traded across borders and has emerged as an important driver of growth in the world's service sector.

Research limitations/implications: This paper advances the understanding of non-ownership services in an increasingly global and specialized post-industrial economy. The paper makes a conceptual contribution supported by descriptive data, but without empirical testing.

Originality/value: The authors integrate the non-ownership concept and three related economic theories of the firm to explain the role of global business services in driving business performance and the international transformation of service economies.

Relevance: 40.00%

Abstract:

This paper addresses the problem of predicting the outcome of an ongoing case of a business process based on event logs. In this setting, the outcome of a case may refer, for example, to the achievement of a performance objective or the fulfillment of a compliance rule upon completion of the case. Given a log consisting of traces of completed cases, a trace of an ongoing case, and two or more possible outcomes (e.g., a positive and a negative outcome), the paper addresses the problem of determining the most likely outcome for the case in question. Previous approaches to this problem are largely based on simple symbolic sequence classification, meaning that they extract features from traces seen as sequences of event labels and use these features to construct a classifier for runtime prediction. In doing so, these approaches ignore the data payload associated with each event. This paper approaches the problem from a different angle by treating traces as complex symbolic sequences, that is, sequences of events each carrying a data payload. In this context, the paper outlines different feature encodings of complex symbolic sequences and compares their predictive accuracy on real-life business process event logs.
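As a hedged illustration of encoding traces as complex symbolic sequences, the sketch below builds a simple index-based encoding (event label plus one payload attribute per prefix position) and trains an off-the-shelf classifier. The activity names, the "amount" attribute and the tiny log are invented, and this encoding is just one plausible variant, not necessarily the paper's.

```python
# Hedged sketch: index-based encoding of trace prefixes for outcome
# prediction. Events carry a label and a data payload; the data are toy.
from sklearn.ensemble import RandomForestClassifier

PREFIX_LEN = 3
VOCAB = {"register": 0, "check": 1, "approve": 2, "reject": 3, "PAD": 4}

def encode(trace):
    """One (label id, payload value) feature pair per prefix position."""
    features = []
    for i in range(PREFIX_LEN):
        if i < len(trace):
            label, payload = trace[i]
            features += [VOCAB[label], payload["amount"]]
        else:
            features += [VOCAB["PAD"], 0.0]
    return features

log = [  # (trace, outcome) pairs; 1 = positive outcome (invented data)
    ([("register", {"amount": 100}), ("check", {"amount": 100})], 1),
    ([("register", {"amount": 900}), ("check", {"amount": 900})], 0),
    ([("register", {"amount": 120}), ("check", {"amount": 120})], 1),
    ([("register", {"amount": 880}), ("check", {"amount": 880})], 0),
]

X = [encode(t) for t, _ in log]
y = [outcome for _, outcome in log]
clf = RandomForestClassifier(random_state=0).fit(X, y)

running = [("register", {"amount": 850})]      # an ongoing case
print(clf.predict_proba([encode(running)]))    # most likely outcome
```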

Relevance: 40.00%

Abstract:

A candidate gene approach using type I single nucleotide polymorphism (SNP) markers can provide an effective method for detecting genes and gene regions that underlie phenotypic variation in adaptively significant traits. In the absence of available genomic data resources, transcriptomes were recently generated in Macrobrachium rosenbergii to identify candidate genes and markers potentially associated with growth. The characterisation of 47 candidate loci by ABI re-sequencing of four cultured and eight wild samples revealed 342 putative SNPs. Among these, 28 SNPs in 23 growth-related candidate genes were selected for genotyping in 200 animals selected for improved growth performance in an experimental GFP culture line in Vietnam. The associations between SNP markers and individual growth performance were then examined. For additive and dominant effects, a total of three exonic SNPs in glycogen phosphorylase (additive), heat shock protein 90 (additive and dominant) and peroxidasin (additive), and six intronic SNPs in ankyrin repeats-like protein (additive and dominant), rolling pebbles (dominant), transforming growth factor-β induced precursor (dominant), and UTP-glucose-1-phosphate uridylyltransferase 2 (dominant) genes showed significant associations with the estimated breeding values in the experimental animals (P = 0.001–0.031). Individually, they explained 2.6–4.8% of the genetic variance (R² = 0.026–0.048). This is the first large set of SNP markers reported for M. rosenbergii; it will be useful for confirming associations in other samples or culture lines, as well as for marker-assisted selection in future breeding programs.
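A hedged sketch of the additive/dominance association test the abstract describes: genotypes coded 0/1/2 for the additive term and a heterozygote indicator for the dominance term, regressed against estimated breeding values. The genotypes, effect sizes and breeding values are simulated; the study's actual models and software are not specified here.

```python
# Hedged sketch: single-SNP association with additive and dominance terms.
# All data are simulated; only the animal count echoes the abstract.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(1)
n = 200                                    # animals, as in the abstract
genotype = rng.integers(0, 3, size=n)      # copies of the minor allele: 0/1/2
additive = genotype.astype(float)
dominance = (genotype == 1).astype(float)  # heterozygote indicator
ebv = 0.3 * additive + 0.2 * dominance + rng.normal(size=n)  # assumed effects

X = sm.add_constant(np.column_stack([additive, dominance]))
fit = sm.OLS(ebv, X).fit()
print(fit.pvalues[1:])   # additive and dominance P-values
print(fit.rsquared)      # variance explained, cf. the reported R² range
```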

Relevance: 40.00%

Abstract:

Australia is a leading user of collaborative procurement methods, which are used to deliver large and complex infrastructure projects. Project alliances, Early Contractor Involvement (ECI), and partnering are typical examples of collaborative procurement models. In order to increase procurement effectiveness and value for money (VfM), clients have adopted various learning strategies for new contract development. However client learning strategies and behaviours have not been systematically analysed before. Therefore, the current paper undertakes a literature review addressing the research question “How can client learning capabilities be effectively understood?”. From the resource-based and dynamic capability perspectives, this paper proposes that the collaborative learning capability (CLC) of clients drives procurement model evolution. Learning routines underpinning CLC carry out exploratory, transformative and exploitative learning phases associated with collaborative project delivery. This learning improves operating routines, and ultimately performance. The conceptualization of CLC and the three sequential learning phases is used to analyse the evidence in the construction management literature. The main contribution of this study is the presentation of a theoretical foundation for future empirical studies to unveil effective learning strategies, which help clients to improve the performance of collaborative projects in the dynamic infrastructure market.

Relevance: 40.00%

Abstract:

The concept of big data has already outperformed traditional data management efforts in almost all industries. In other instances it has yielded promising results that derive value from large-scale integration and analysis of heterogeneous data sources, for example genomic and proteomic information. Big data analytics has become increasingly important for handling data sets and analytical techniques in software applications that are large and complex, owing to its significant advantages, including better business decisions, cost reduction, and the delivery of new products and services [1]. In a similar context, the health community has experienced not only more complex and larger data content, but also information systems containing a large number of data sources with interrelated and interconnected data attributes. This has resulted in challenging and highly dynamic environments, leading to the creation of big data with its innumerable complexities, for instance sharing information while meeting stakeholders' expected security requirements. Compared with other sectors, the health sector is still in the early stages of big data analysis. Key challenges include accommodating the volume, velocity and variety of healthcare data amid the current deluge of exponential growth. Given the complexity of big data, while data storage and accessibility are technically manageable, implementing Information Accountability measures for healthcare big data may be a practical way to support information security, privacy and traceability. Transparency is one important measure that can demonstrate integrity, a vital factor in healthcare services. Clarity about performance expectations is another Information Accountability measure, necessary to avoid data ambiguity, controversy about interpretation and, ultimately, liability [2]. According to current studies, Electronic Health Records (EHRs) are key information resources for big data analysis and are composed of varied co-created values [3]. Common healthcare information originates from, and is used by, different actors and groups, which facilitates understanding of its relationships to other data sources. Consequently, healthcare services often operate as an integrated service bundle. Although a critical requirement in healthcare services and analytics, a comprehensive set of guidelines for adopting EHRs to fulfil big data analysis requirements is difficult to find. As a remedy, this research focuses on a systematic approach containing comprehensive guidelines, specifying the data that must be provided, to apply and evaluate big data analysis until the necessary decision-making requirements are fulfilled and the quality of healthcare services is improved. We believe this approach would subsequently improve quality of life.

Relevance: 40.00%

Abstract:

Increasingly large-scale applications are generating an unprecedented amount of data. However, the widening gap between computation and I/O capacity on High End Computing (HEC) machines creates a severe bottleneck for data analysis. Instead of moving data from its source to the output storage, in-situ analytics processes output data while simulations are running. However, in-situ data analysis incurs much more computing-resource contention with simulations, and such contention severely degrades simulation performance on HEC machines. Since different data processing strategies have different impacts on performance and cost, there is a consequent need for flexibility in the location of data analytics. In this paper, we explore and analyze several potential data-analytics placement strategies along the I/O path. To find the best strategy for reducing data movement in a given situation, we propose a flexible data analytics (FlexAnalytics) framework. Based on this framework, a FlexAnalytics prototype system is developed for analytics placement. The FlexAnalytics system enhances the scalability and flexibility of the current I/O stack on HEC platforms and is useful for data pre-processing, runtime data analysis and visualization, as well as for large-scale data transfer. Two use cases – scientific data compression and remote visualization – have been applied in the study to verify the performance of FlexAnalytics. Experimental results demonstrate that the FlexAnalytics framework increases data-transfer bandwidth and improves application end-to-end transfer performance.
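A hedged, self-contained sketch of the placement trade-off behind one of the use cases: compressing output at the source is worthwhile only when the CPU cost is outweighed by the transfer savings. The link bandwidth and payload are invented numbers; this does not reproduce the FlexAnalytics system itself.

```python
# Toy sketch of the compress-before-transfer trade-off in analytics
# placement. Bandwidth and payload are hypothetical.
import time
import zlib

LINK_BANDWIDTH = 100e6  # bytes/s, assumed I/O-path bandwidth

payload = b"simulation output row with repetitive structure\n" * 200_000

start = time.perf_counter()
compressed = zlib.compress(payload, level=1)   # cheap, fast compression
compress_time = time.perf_counter() - start

raw_transfer = len(payload) / LINK_BANDWIDTH
comp_transfer = compress_time + len(compressed) / LINK_BANDWIDTH

print(f"raw:        {raw_transfer:.3f}s to move {len(payload)} bytes")
print(f"compressed: {comp_transfer:.3f}s ({len(compressed)} bytes + CPU)")
# If comp_transfer < raw_transfer, placing compression at the source
# (in situ) wins; otherwise ship raw data and analyze downstream.
```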

Relevance: 40.00%

Abstract:

In fisheries managed using individual transferable quotas (ITQs), it is generally assumed that quota markets are well-functioning, allowing quota to flow, on either a temporary or permanent basis, to those able to make best use of it. However, despite an increasing number of fisheries being managed under ITQs, empirical assessments of the quota markets that have actually evolved in these fisheries remain scarce. The Queensland Coral Reef Fin-Fish Fishery (CRFFF) on the Great Barrier Reef has been managed under a system of ITQs since 2004. Data on individual quota holdings and trades for the period 2004–2012 were used to assess the CRFFF quota market and its evolution through time. Network analysis was applied to assess market structure and the nature of lease-trading relationships. An assessment of market participants' abilities to balance their quota accounts, i.e., gap analysis, provided insights into market functionality and how it may have changed over the period observed. Trends in ownership and trade were determined, and market participants were identified as belonging to one of seven generalized types. The emergence of groups such as investors and lease-dependent fishers is clear. In 2011–2012, 41% of coral trout quota was owned by participants who did not fish it, and 64% of total coral trout landings were made by fishers who owned only 10% of the quota. Quota brokers emerged whose influence on the market varied with the bioeconomic conditions of the fishery. Throughout the study period some quota was found to remain inactive, implying potential market inefficiencies. Contribution to this inactivity appeared asymmetrical, with most residing in the hands of smaller quota holders. The importance of transaction costs in the operation of the quota market, and the inequalities that may result, are discussed in light of these findings.
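As a hedged illustration of the network analysis applied to lease-trading relationships, the sketch below builds a directed trade graph with networkx and computes simple structural measures. The participants and trades are invented and are not the CRFFF dataset.

```python
# Illustrative sketch: quota lease-trading network analysis.
# Nodes and trades are hypothetical.
import networkx as nx

G = nx.DiGraph()
trades = [  # (lessor, lessee, quota units leased)
    ("investor_1", "fisher_A", 40),
    ("investor_1", "fisher_B", 25),
    ("broker_1",   "fisher_A", 15),
    ("fisher_C",   "broker_1", 30),
]
for lessor, lessee, units in trades:
    G.add_edge(lessor, lessee, weight=units)

# Weighted out-degree ~ quota supplied; in-degree ~ lease dependence.
print(dict(G.out_degree(weight="weight")))
print(dict(G.in_degree(weight="weight")))
print(nx.betweenness_centrality(G))  # brokers sit on many trade paths
```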