909 results for Business Intelligence, Data Warehouse, Sistemi Informativi


Relevance:

30.00%

Publisher:

Abstract:

The following project introduces a model of Growth Hacking strategies for business-to-business Software-as-a-Service startups, developed in collaboration with and applied to a Portuguese startup called Liquid. The work addresses digital marketing channels such as content marketing, email marketing, social marketing and selling. Further, the company's product, pricing strategy, partnerships and website communication are examined. Drawing on best practices, competitor benchmarks and interview insights from numerous industry influencers and experts, areas for improvement are identified and procedures for each of those channels are recommended.

Relevance:

30.00%

Publisher:

Abstract:

Comexposium continues to exhibit strong growth through the global acquisition of key events. However, the company has identified the need to increase the renewal rate of its exhibitors, and determined that marketing automation software could be of enormous value in doing so. The company does not yet have the data needed to determine the specific returns the software could provide. This report therefore assesses the impact of marketing automation on the business performance of a B2B enterprise and the best methods to implement and measure it. The main finding is that the software could be of immense value to Comexposium, provided the company is ready to invest in internal resources and take the time to adapt to the changes the tool will bring.

Relevance:

30.00%

Publisher:

Abstract:

Earthworks tasks aim at levelling the ground surface at a target construction area and precede any kind of structural construction (e.g., road and railway construction). They comprise sequential tasks, such as excavation, transportation, spreading and compaction, and rely heavily on mechanical equipment and repetitive processes. In this context, it is essential to optimize the usage of all available resources under two key criteria: the cost and duration of earthwork projects. In this paper, we present an integrated system that uses two artificial-intelligence techniques: data mining and evolutionary multi-objective optimization. The former is used to build data-driven models capable of providing realistic estimates of resource productivity, while the latter is used to optimize resource allocation considering the two main earthwork objectives (duration and cost). Experiments using real-world data from a construction site have shown that the proposed system is competitive with current manual earthwork design.
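To make the two-objective trade-off concrete, here is a minimal sketch in Python: a toy random search over machine allocations that keeps the Pareto front of (duration, cost). The productivity and cost figures are invented, and the random search merely stands in for the paper's evolutionary multi-objective algorithm; in the actual system the productivity estimates come from data-mining models fitted to site data.

```python
import random

TASKS = [("excavation", 1200.0), ("transportation", 900.0),
         ("spreading", 600.0), ("compaction", 400.0)]   # (task, work units)
PRODUCTIVITY = 10.0      # work units per machine per day (assumed constant)
DAY_RATE = 150.0         # cost per machine-day (assumed)
MOBILISATION = 500.0     # fixed cost per machine deployed (assumed)

def evaluate(alloc):
    """Map an allocation (machines per task) to the pair (duration, cost)."""
    durations = [work / (n * PRODUCTIVITY)
                 for (_, work), n in zip(TASKS, alloc)]
    duration = sum(durations)                    # tasks run sequentially
    cost = sum(n * MOBILISATION + n * d * DAY_RATE
               for n, d in zip(alloc, durations))
    return duration, cost

def dominates(a, b):
    """Pareto dominance: a is no worse in both objectives and differs from b."""
    return all(x <= y for x, y in zip(a, b)) and a != b

front = {}                                       # allocation -> objectives
for _ in range(2000):
    alloc = tuple(random.randint(1, 8) for _ in TASKS)
    obj = evaluate(alloc)
    if any(dominates(o, obj) for o in front.values()):
        continue                                 # dominated: discard
    front = {a: o for a, o in front.items() if not dominates(obj, o)}
    front[alloc] = obj

for alloc, (d, c) in sorted(front.items(), key=lambda kv: kv[1]):
    print(f"machines per task {alloc}: {d:6.1f} days, {c:9.0f} cost units")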

Relevance:

30.00%

Publisher:

Abstract:

Studies in Computational Intelligence, 616

Relevance:

30.00%

Publisher:

Abstract:

Integrated master's dissertation in Information Systems Engineering and Management (Engenharia e Gestão de Sistemas de Informação)

Relevance:

30.00%

Publisher:

Abstract:

Large-scale distributed data stores rely on optimistic replication to scale and remain highly available in the face of network partitions. Managing data without coordination results in eventually consistent data stores that allow concurrent data updates. These systems often use anti-entropy mechanisms (like Merkle trees) to detect and repair divergent data versions across nodes. However, in practice hash-based data structures are too expensive for large amounts of data and create too many false conflicts. Another aspect of eventual consistency is detecting write conflicts. Logical clocks are often used to track data causality, which is necessary to detect causally concurrent writes on the same key. However, there is a non-negligible metadata overhead per key, which keeps growing with time, proportionally to the node churn rate. A further challenge is deleting keys while respecting causality: while the values can be deleted, per-key metadata cannot be permanently removed without coordination. We introduce a new causality management framework for eventually consistent data stores that leverages node logical clocks (Bitmapped Version Vectors) and a new key logical clock (Dotted Causal Container) to provide advantages on multiple fronts: 1) a new efficient and lightweight anti-entropy mechanism; 2) greatly reduced per-key causality metadata size; 3) accurate key deletes without permanent metadata.
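The conflict-detection problem the framework addresses can be illustrated with a plain version-vector comparison. This is a minimal sketch only; the paper's Bitmapped Version Vectors and Dotted Causal Containers exist precisely to compress and manage this kind of per-key metadata.

```python
def compare(vv_a, vv_b):
    """Return 'equal', 'before', 'after' or 'concurrent' for two version vectors."""
    nodes = set(vv_a) | set(vv_b)
    a_le_b = all(vv_a.get(n, 0) <= vv_b.get(n, 0) for n in nodes)
    b_le_a = all(vv_b.get(n, 0) <= vv_a.get(n, 0) for n in nodes)
    if a_le_b and b_le_a:
        return "equal"
    if a_le_b:
        return "before"      # a happened-before b: b can safely overwrite a
    if b_le_a:
        return "after"
    return "concurrent"      # a true write conflict: both versions must be kept

# Two replicas update the same key without coordination:
print(compare({"n1": 2, "n2": 1}, {"n1": 2, "n2": 3}))  # before
print(compare({"n1": 3, "n2": 1}, {"n1": 2, "n2": 3}))  # concurrent
```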

Relevance:

30.00%

Publisher:

Abstract:

We study the problem of privacy-preserving proofs on authenticated data, where a party receives data from a trusted source and is requested to prove computations over the data to third parties in a correct and private way, i.e., the third party learns no information on the data but is still assured that the claimed proof is valid. Our work particularly focuses on the challenging requirement that the third party should be able to verify the validity with respect to the specific data authenticated by the source — even without having access to that source. This problem is motivated by various scenarios emerging from several application areas such as wearable computing, smart metering, or general business-to-business interactions. Furthermore, these applications also demand any meaningful solution to satisfy additional properties related to usability and scalability. In this paper, we formalize the above three-party model, discuss concrete application scenarios, and then we design, build, and evaluate ADSNARK, a nearly practical system for proving arbitrary computations over authenticated data in a privacy-preserving manner. ADSNARK improves significantly over state-of-the-art solutions for this model. For instance, compared to corresponding solutions based on Pinocchio (Oakland’13), ADSNARK achieves up to 25× improvement in proof-computation time and a 20× reduction in prover storage space.
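The three-party model can be sketched schematically as below, with the cryptography replaced by inert placeholders; every name here is hypothetical and nothing reflects the actual ADSNARK interface, only who holds which inputs.

```python
from dataclasses import dataclass

@dataclass
class AuthenticatedData:
    values: list      # stays with the source and the prover, never the verifier
    auth_tag: str     # binds the values to the source

@dataclass
class Proof:
    claimed_result: float
    evidence: str     # in ADSNARK: a succinct zero-knowledge proof

def source_sign(values):
    """Trusted source (e.g., a smart meter) authenticates its readings."""
    return AuthenticatedData(values, auth_tag="<signature over values>")

def prover_prove(data):
    """Prover computes over the data and attaches evidence of correctness."""
    result = sum(data.values) / len(data.values)   # e.g., a billing formula
    return Proof(result, evidence="<proof binding result to signed data>")

def verifier_verify(proof, source_public_key):
    """Accepts iff the evidence shows the computation was run on data
    authenticated by the source, without seeing the data itself."""
    return proof.evidence.startswith("<proof")     # placeholder check only

proof = prover_prove(source_sign([12.3, 9.8, 11.1]))
print(proof.claimed_result, verifier_verify(proof, "<source public key>"))
```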

Relevance:

30.00%

Publisher:

Abstract:

Healthcare organizations often benefit from information technologies and embedded decision support systems, which improve the quality of services and help prevent complications and adverse events. In Centro Materno Infantil do Norte (CMIN), the maternal and perinatal care unit of Centro Hospitalar do Porto (CHP), an intelligent pre-triage system is implemented, aiming to prioritize patients in need of gynaecology and obstetrics care into two classes: urgent and consultation. The system is designed to avoid emergency-department problems such as incorrect triage outcomes and excessive triage waiting times. The current study intends to improve the triage system, and therefore optimize the patient workflow through the emergency room, by predicting the waiting time between a patient's triage and their medical admission. For this purpose, data mining (DM) techniques are applied to selected information provided by the information technologies implemented at CMIN. The DM models achieved accuracy values of approximately 94% with a five-range target distribution, which not only yields confident prediction models but also identifies the variables that act as direct inducers of triage waiting times.
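A minimal sketch of the prediction setup, assuming a five-way binning of waiting times and a generic classifier: the features and data below are synthetic stand-ins, not the CMIN variables, and the ~94% figure comes from the study's real data.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
n = 1000
X = np.column_stack([
    rng.integers(0, 2, n),     # pre-triage class: urgent (1) vs. consultation (0)
    rng.integers(0, 24, n),    # hour of arrival
    rng.integers(0, 30, n),    # patients already in the queue
])
minutes = 10 + 4 * X[:, 2] + 20 * (1 - X[:, 0]) + rng.normal(0, 10, n)
y = np.digitize(minutes, bins=[15, 30, 60, 120])   # five waiting-time ranges

clf = RandomForestClassifier(n_estimators=200, random_state=0)
print("CV accuracy:", cross_val_score(clf, X, y, cv=5).mean())
# Feature importances point at the main inducers of waiting time:
print("importances:", clf.fit(X, y).feature_importances_)
```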

Relevance:

30.00%

Publisher:

Abstract:

Recent empirical evidence has found that employment services and small-business assistance programmes are often successful at getting the unemployed back to work. One important concern of policy makers is to decide which of these two programmes is more effective and for whom. Using unusually rich (for transition economies) survey data and matching methods, I evaluate the relative effectiveness of these two programmes in Romania. While I find that employment services (ES) are, on average, more successful than a small-business assistance programme (SBA), estimation of heterogeneity effects reveals that, compared to non-participation, ES are effective for workers with little access to informal search channels, and SBA works for less-qualified workers and those living in rural areas. When comparing ES to SBA, I find that ES tend to be more efficient than SBA for workers without a high-school degree, and that the opposite holds for the more educated workers.
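The matching approach can be sketched as follows: estimate each worker's propensity to participate from observables, match participants to the nearest non-participants on that score, and compare outcomes. The data and covariates below are synthetic stand-ins for the Romanian survey variables.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)
n = 2000
X = rng.normal(size=(n, 3))                # stand-ins for age, education, rural
score = X @ np.array([0.5, 0.8, -0.3])
treated = rng.random(n) < 1 / (1 + np.exp(-score))   # programme participation
outcome = 0.3 * treated + X @ np.array([0.2, 0.4, -0.1]) + rng.normal(0, 1, n)

# Nearest-neighbour matching on the estimated propensity score.
pscore = LogisticRegression().fit(X, treated).predict_proba(X)[:, 1]
t_idx, c_idx = np.where(treated)[0], np.where(~treated)[0]
nearest = np.abs(pscore[c_idx][None, :] - pscore[t_idx][:, None]).argmin(axis=1)
att = (outcome[t_idx] - outcome[c_idx[nearest]]).mean()
print(f"matched treatment effect (true value 0.3): {att:.3f}")
```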

Relevance:

30.00%

Publisher:

Abstract:

This paper shows that introducing weak property rights in the standard real business cycle (RBC) model can help to explain economic fluctuations. This is motivated by the empirical observation that changes in institutions in emerging markets are related to the evolution of the main macroeconomic variables. In particular, in Mexico, the movements in productivity in the data are associated with changes in institutions, so that we can explain productivity shocks to a large extent as shocks to the quality of institutions. We find that the model with shocks to the degree of protection of property rights only - without technology shocks - can match the second moments in the data for Mexico well. In particular, the fit is better than that of the standard neoclassical model with full protection of property rights regarding the auto-correlations and cross-correlations in the data, especially those related to labor. Viewing productivity shocks as shocks to institutions is also consistent with the stylized fact of falling productivity and non-decreasing labor hours in Mexico over 1980-1994, which is a feature that the neoclassical model cannot match.
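A minimal sketch of the moment-matching exercise behind this fit comparison, run on synthetic series standing in for the Mexican data and model output: HP-filter each series, then compare autocorrelations and cross-correlations with hours.

```python
import numpy as np
from statsmodels.tsa.filters.hp_filter import hpfilter

rng = np.random.default_rng(2)

def ar1(rho, n=160):                 # a quarterly-length toy series
    x = np.zeros(n)
    for t in range(1, n):
        x[t] = rho * x[t - 1] + rng.normal()
    return x

output, hours = ar1(0.9), ar1(0.8)   # placeholders for data / model output
cycle_y, _ = hpfilter(output, lamb=1600)
cycle_h, _ = hpfilter(hours, lamb=1600)
print("output autocorrelation:", np.corrcoef(cycle_y[1:], cycle_y[:-1])[0, 1])
print("output-hours correlation:", np.corrcoef(cycle_y, cycle_h)[0, 1])
```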

Relevance:

30.00%

Publisher:

Abstract:

Employing the financial accelerator (FA) model of Bernanke, Gertler and Gilchrist (1999), enhanced to include a shock to the FA mechanism, we construct and study shocks to the efficiency of the financial sector in post-war US business cycles. We find that financial shocks are very tightly linked with the onset of recessions, more so than TFP or monetary shocks. The financial shock invariably remains contractionary for some time after recessions have ended. The shock accounts for a large part of the variance of GDP and is strongly negatively correlated with the external finance premium. Second-moment comparisons across variants of the model with and without a (stochastic) FA mechanism suggest that the stochastic FA model helps us understand the data.

Relevance:

30.00%

Publisher:

Abstract:

The project aims to achieve two objectives. First, we are analysing the labour market implications of the assumption that firms cannot pay similarly qualified employees differently according to when they joined the firm. For example, if the general situation for workers improves, a firm that seeks to hire new workers may feel it has to pay more to new hires. However, if the firm must pay the same wage to new hires and incumbents due to equal treatment, it would either have to raise the wage of the incumbents, or offer new workers a lower wage than it otherwise would. This is very different from the standard assumption in economic analysis that firms are free to treat newly hired workers independently of existing hires. Second, we will use detailed data on individual wages to try to gauge whether (and to what extent) equity is a feature of actual labour markets. To investigate this, we are using two matched employer-employee panel datasets, one from Portugal and the other from Brazil. These unique datasets provide objective records on millions of workers and their firms over a long period of time, so that we can identify which firms employ which workers at each time. The datasets also include a large number of firm and worker variables.

Relevance:

30.00%

Publisher:

Abstract:

In this paper, we consider a producer who faces uninsurable business risks due to incomplete spanning of asset markets over stochastic goods-market outcomes, and examine how the presence of these uninsurable risks affects the producer's optimal pricing and production behaviour. Three key (inter-related) results are: (1) optimal prices in goods markets comprise a 'markup' reflecting market power and a 'premium' priced at the shadow value of the risks; (2) the price inertia observed in data can be explained by the joint operation of the risk-neutralization motive and the marginal-cost-equalization condition; (3) the relative responsiveness of risk neutralization and marginal-cost equalization at the optimum is central to the cyclical variation of markups, providing a consistent explanation for both procyclical and countercyclical movements. Through these results, the proposed theory of the producer has important implications, both micro and macro, and both empirical and theoretical.
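Result (1) can be read through a standard markup-pricing form; the decomposition below is an illustrative assumption for exposition, with \lambda denoting the shadow value of the uninsurable risk, not the paper's exact expression.

```latex
% Illustrative reading of result (1): price as a market-power markup over
% marginal cost plus a premium priced at the shadow value \lambda of the
% uninsurable risk (an assumed functional form).
p \;=\; \underbrace{\frac{\varepsilon}{\varepsilon - 1}}_{\text{markup (market power)}}
        \Bigl(\, \underbrace{mc}_{\text{marginal cost}}
               + \underbrace{\lambda}_{\text{risk premium}} \,\Bigr)
```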

Relevance:

30.00%

Publisher:

Abstract:

This paper is inspired by articles of the last decade or so that have argued for more attention to theory, and to empirical analysis, within the well-known and long-lasting contingency framework for explaining the organisational form of the firm. Its contribution is to extend contingency analysis in three ways: (a) by testing it empirically, using explicit econometric modelling (rather than case-study evidence) with estimation by ordered probit analysis; (b) by extending its scope from large firms to SMEs; (c) by extending its applications from Western economic contexts to an emerging-economy context, using fieldwork evidence from China. It calibrates organisational form in a new way, as an ordinal dependent variable, and also utilises new measures of familiar contingency factors from the literature (i.e. Environment, Strategy, Size and Technology) as the independent variables. An ordered probit model of contingency was constructed and estimated by maximum likelihood, using a cross-section of 83 private Chinese firms. The probit was found to be a good fit to the data, and displayed significant coefficients with plausible interpretations for key variables under all four categories of contingency analysis, namely Environment, Strategy, Size and Technology. Thus we have generalised the contingency model in terms of specification, interpretation and applications area.
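A minimal sketch of the estimation strategy, using statsmodels' ordered probit on synthetic stand-ins for the 83-firm sample; the variable names mirror the paper's four contingency categories, but the data and coefficients are invented.

```python
import numpy as np
import pandas as pd
from statsmodels.miscmodels.ordinal_model import OrderedModel

rng = np.random.default_rng(3)
n = 83
df = pd.DataFrame({
    "environment": rng.normal(size=n),   # environmental uncertainty score
    "strategy": rng.normal(size=n),      # strategic differentiation score
    "size": rng.normal(size=n),          # e.g., log employees, standardised
    "technology": rng.normal(size=n),    # technological sophistication score
})
# Ordinal dependent variable: organisational form, from a latent index.
latent = df.to_numpy() @ np.array([0.4, 0.3, 0.6, 0.2]) + rng.normal(size=n)
df["form"] = pd.cut(latent, bins=[-np.inf, -1, 1, np.inf],
                    labels=["simple", "functional", "divisional"])

model = OrderedModel(df["form"],
                     df[["environment", "strategy", "size", "technology"]],
                     distr="probit")
print(model.fit(method="bfgs", disp=False).summary())
```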