931 results for Future value prediction


Relevance:

30.00%

Publisher:

Abstract:

Thesis (Ph.D.)--University of Washington, 2016-07

Relevance:

30.00%

Publisher:

Abstract:

We provide theory and evidence to complement Choi's [RFS, 2013] important new insights on the returns to equity in `value' firms. We show that higher future earnings growth ameliorates the value-reducing effect of leverage and, because the market for earnings is incomplete, reduces the earnings-risk sensitivity of the default option. Ceteris paribus, a levered firm with low (high) earnings growth is more sensitive to the first (second) of these effects, thus generating higher (lower) expected returns. We demonstrate this by modeling equity as an Asian-style call option on net earnings, and we find significant empirical support for our hypotheses.
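The abstract's central modeling device is equity as an Asian-style call option on net earnings, i.e. an option whose payoff depends on the path average rather than the terminal value. A minimal Monte Carlo sketch of such an option is below; the GBM dynamics, parameter values, and the use of leverage as the strike are illustrative assumptions, not the authors' calibration.

```python
import numpy as np

# Hedged sketch: price an arithmetic-average Asian call on net earnings.
# All parameter names/values here are illustrative assumptions.
def asian_call_mc(e0, strike, mu, sigma, T, n_steps, n_paths, r, seed=0):
    """Monte Carlo price of an arithmetic-average Asian call."""
    rng = np.random.default_rng(seed)
    dt = T / n_steps
    # Simulate geometric Brownian motion paths for net earnings.
    z = rng.standard_normal((n_paths, n_steps))
    increments = (mu - 0.5 * sigma**2) * dt + sigma * np.sqrt(dt) * z
    paths = e0 * np.exp(np.cumsum(increments, axis=1))
    avg_earnings = paths.mean(axis=1)        # arithmetic average over the path
    payoff = np.maximum(avg_earnings - strike, 0.0)
    return np.exp(-r * T) * payoff.mean()

# Higher earnings growth (mu) raises the value of the averaged claim,
# consistent with growth ameliorating the value-reducing effect of leverage.
low_growth = asian_call_mc(100, 100, 0.02, 0.25, 1.0, 252, 20000, 0.03)
high_growth = asian_call_mc(100, 100, 0.10, 0.25, 1.0, 252, 20000, 0.03)
assert high_growth > low_growth > 0
```

Because the payoff averages earnings along the path, the option is less sensitive to terminal earnings risk than a vanilla call, which is one way to read the abstract's reduced "earnings-risk sensitivity of the default option".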

Relevance:

30.00%

Publisher:

Abstract:

Thesis (Ph.D.)--University of Washington, 2016-07

Relevance:

30.00%

Publisher:

Abstract:

The main objective of the project was to improve the efficiency of sheet-metal and paint repair services at Caetano Auto Colisão through the application of tools associated with the Lean philosophy. Although lean tools and techniques are well explored in production and manufacturing companies, the same is not true of companies in the service sector. Value Stream Mapping is a lean tool that consists of mapping the flow of materials and information required to carry out the activities (value-adding and non-value-adding) performed by employees, suppliers and distributors, from receipt of the customer's order to final delivery of the service. With this tool it is possible to identify the activities that add no value to the process and to propose improvement measures that eliminate or reduce them. Based on this concept, the sheet-metal and paint service process was mapped and the sources of inefficiency were identified. From this analysis, improvements were suggested with the aim of reaching the proposed future state and making the process more efficient. Two of these improvements were the implementation of 5S in the paint room and the preparation of an A3 report for the washing centre. The project allowed the study of a real problem in a service company, as well as the proposal of a set of improvements which, in the medium term, are expected to contribute to more efficient sheet-metal and paint repair services.

Relevance:

30.00%

Publisher:

Abstract:

Under Law 12.715/2012, the Brazilian government instituted the guidelines of a program named Inovar-Auto. In this context, energy efficiency has been a survival requirement for the Brazilian automotive industry since September 2016. As the law stipulates, energy efficiency is not calculated per model only: it is calculated over the whole universe of new vehicles registered. In this scenario, the composition of vehicles sold in the market will be a key factor in each automaker's profits, so energy efficiency and its consequences should be considered in all their aspects. This raises the following question: what long-term efficiency curve allows an automaker to comply with the rules and keep its investment in technologies balanced, increasing energy efficiency without harming the competitiveness of its product lineup? Among the several variables to be considered, the analysis of manufacturing costs, customer value perception and market share stand out, which characterizes this as a multi-criteria decision-making problem. To tackle the energy efficiency problem imposed by the legislation, this paper proposes a multi-criteria decision-making framework that combines a Delphi group and the Analytic Hierarchy Process to identify suitable alternatives for automakers to incorporate in the main Brazilian vehicle segments. A forecast model based on artificial neural networks was used to estimate vehicle sales demand and validate the expected results. The approach is demonstrated in a real case study using public vehicle sales data from Brazilian automakers and public energy efficiency data.
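The core quantitative step of the Analytic Hierarchy Process named in the abstract is deriving criteria weights from a pairwise comparison matrix and checking judgment consistency. A minimal sketch follows; the three criteria and the judgment values are illustrative assumptions, not the study's data.

```python
import numpy as np

# Hedged sketch of the AHP weighting step: the priority vector is the
# principal eigenvector of the pairwise comparison matrix.
def ahp_weights(pairwise):
    a = np.asarray(pairwise, dtype=float)
    eigvals, eigvecs = np.linalg.eig(a)
    k = np.argmax(eigvals.real)
    w = np.abs(eigvecs[:, k].real)
    return w / w.sum(), eigvals[k].real

def consistency_ratio(lambda_max, n):
    """CR = CI / RI, using Saaty's random index values for small n."""
    ri = {1: 0.0, 2: 0.0, 3: 0.58, 4: 0.90, 5: 1.12}[n]
    ci = (lambda_max - n) / (n - 1)
    return ci / ri if ri else 0.0

# Assumed criteria: manufacturing cost, customer value perception, market share.
judgments = [[1, 3, 5],
             [1 / 3, 1, 2],
             [1 / 5, 1 / 2, 1]]
weights, lam = ahp_weights(judgments)
assert abs(weights.sum() - 1.0) < 1e-9
assert consistency_ratio(lam, 3) < 0.10  # judgments acceptably consistent
```

In practice the Delphi group would supply the pairwise judgments, and a CR above 0.10 would send the matrix back to the panel for revision.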

Relevance:

30.00%

Publisher:

Abstract:

Background: Sweet cherries (Prunus avium L.) are a nutritious fruit, rich in polyphenols and with high antioxidant potential. Most sweet cherries are consumed fresh, and only a small proportion of total sweet cherry production is value-added into processed food products. Sweet cherries are a highly perishable fruit with a short harvest season, so extensive preservation and processing methods have been developed to extend their shelf-life and distribute their products. Scope and Approach: In this review, the main physicochemical properties of sweet cherries, as well as their bioactive components and the methods for determining them, are described. The study emphasises recent progress in postharvest technology, such as controlled/modified atmosphere storage, edible coatings, irradiation, and biological control agents, to maintain sweet cherries for the fresh market. Valorisation of second-grade sweet cherries and trends in the diversification of cherry products for future studies are also discussed. Key Findings and Conclusions: Sweet cherry fruit have a short harvest period and marketing window. The major losses in quality after harvest include moisture loss, softening, decay and stem browning. Without compromising eating quality, extending fruit quality and shelf-life for sweet cherries is feasible through a combination of good handling practice and appropriate postharvest technology. With the drive of the health-food sector, the potential for using second-class cherries, including cherry stems, as a source of bioactive compound extraction is high, as cherry fruit are well known to be rich in health-promoting components.

Relevance:

30.00%

Publisher:

Abstract:

Finding rare events in multidimensional data is an important detection problem with applications in many fields, such as risk estimation in the insurance industry, finance, flood prediction, medical diagnosis, quality assurance, security, and safety in transportation. The occurrence of such anomalies is so infrequent that there is usually not enough training data to learn an accurate statistical model of the anomaly class. In some cases, such events may never have been observed, so the only information available is a set of normal samples and an assumed pairwise similarity function. Such a metric may be known only up to a certain number of unspecified parameters, which must either be learned from training data or fixed by a domain expert. Sometimes the anomalous condition can be formulated algebraically, such as a measure exceeding a predefined threshold, but nuisance variables may complicate the estimation of that measure. Change detection methods used in time series analysis are not easily extendable to the multidimensional case, where discontinuities are not localized to a single point. On the other hand, in higher dimensions data exhibit more complex interdependencies, and there is redundancy that can be exploited to adaptively model the normal data. In the first part of this dissertation, we review the theoretical framework for anomaly detection in images and previous anomaly detection work done in the context of crack detection and detection of anomalous components in railway tracks. In the second part, we propose new anomaly detection algorithms. The fact that curvilinear discontinuities in images are sparse with respect to the frame of shearlets allows us to pose this anomaly detection problem as basis pursuit optimization. We therefore pose the problem of detecting curvilinear anomalies in noisy textured images as a blind source separation problem under sparsity constraints and propose an iterative shrinkage algorithm to solve it.
Taking advantage of the parallel nature of this algorithm, we describe how it can be accelerated using graphics processing units (GPUs). We then propose a new method for finding defective components on railway tracks using cameras mounted on a train, describing how to extract features and use a combination of classifiers to solve this problem. Next, we scale anomaly detection to bigger datasets with complex interdependencies. We show that the anomaly detection problem fits naturally in the multitask learning framework: the first task learns a compact representation of the good samples, while the second learns the anomaly detector. Using deep convolutional neural networks, we show that it is possible to train a deep model with a limited number of anomalous examples. In sequential detection problems, the presence of time-variant nuisance parameters affects detection performance. In the last part of this dissertation, we present a method for adaptively estimating the threshold of sequential detectors using Extreme Value Theory in a Bayesian framework. Finally, conclusions on the results obtained are provided, followed by a discussion of possible future work.
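The "iterative shrinkage algorithm" the abstract mentions for sparsity-constrained recovery is, in its simplest form, ISTA: gradient steps on the data-fit term followed by soft thresholding. The sketch below uses a random matrix as a stand-in for a shearlet frame and a synthetic sparse signal; it illustrates the technique, not the dissertation's actual solver.

```python
import numpy as np

# Hedged sketch: ISTA for min_x 0.5*||A x - y||^2 + lam*||x||_1.
# A is a random stand-in for a shearlet dictionary (an assumption).
def soft_threshold(v, t):
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def ista(A, y, lam, n_iter=500):
    L = np.linalg.norm(A, 2) ** 2          # Lipschitz constant of the gradient
    x = np.zeros(A.shape[1])
    for _ in range(n_iter):
        grad = A.T @ (A @ x - y)           # gradient of the data-fit term
        x = soft_threshold(x - grad / L, lam / L)
    return x

rng = np.random.default_rng(0)
A = rng.standard_normal((64, 128)) / 8.0   # ~unit-norm columns
x_true = np.zeros(128)
x_true[[5, 40, 99]] = [2.0, -1.5, 1.0]     # sparse "anomaly" coefficients
y = A @ x_true + 0.01 * rng.standard_normal(64)
x_hat = ista(A, y, lam=0.05)
# The dominant recovered coefficients sit on the true support.
assert int(np.argmax(np.abs(x_hat))) == 5
```

Each iteration is an independent elementwise shrinkage after a matrix multiply, which is what makes the method embarrassingly parallel and a natural fit for the GPU acceleration the dissertation describes.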

Relevance:

30.00%

Publisher:

Abstract:

In recent years, the luxury market has entered a period of very modest growth, dubbed the 'new normal', in which varying tourist flows, currency fluctuations, and shifting consumer tastes dictate the terms. The modern luxury consumer is a fickle mistress. Millennials especially – people born in the 1980s and 1990s – embody this new form of demanding luxury consumer with particular tastes and values. Modern consumers, and specifically millennials, want experiences and free time, and are interested in a brand's societal position and environmental impact. The purpose of this thesis is to investigate the luxury value perceptions of millennials in higher education in Europe, seeing as many of the world's most prominent luxury goods companies originate from Europe. Perceived luxury value is examined here from the individual's perspective. As values and value perceptions are complex constructs, the use of qualitative research methods is justifiable. The data for this thesis were gathered by means of a group interview. The interview participants all study hospitality management at a private college, and each represents a different nationality. Cultural theories and research on luxury and luxury values provide the scientific foundation for this thesis, and a multidimensional luxury value model is used as a theoretical tool for sorting and analyzing the data. The results show that millennials in Europe value much more than simply modern and hard luxury. Functional, financial, individual, and social aspects are all present in perceived luxury value, though some more in a negative sense than others. Conspicuous, status-seeking consumption is mostly frowned upon, as is the consumption of luxury goods to satisfy social requisites and peer pressure.
Most of the positive value perceptions are attributed to the functional dimension, as luxury products are seen to come with a promise of high quality and reliability, which justifies any price premiums. Ecological and ethical aspects of luxury are already a contemporary trend, but perceived even more as an important characteristic of luxury in the future. Most importantly, having time is fundamental. Depending on who is asked, luxury can mean anything, just as much as it can mean nothing.

Relevance:

30.00%

Publisher:

Abstract:

This thesis looks at how ‘community archaeology’ ideals may influence an inclusive approach to Indigenous heritage management, ensuring Indigenous community power over processes to identify both past and present values of Country. Community archaeology was acclaimed by research archaeologists over a decade ago as a distinctive approach with its own set of practices to incorporate the local community’s perspectives of its past and current associations with place. A core feature of this approach in Australia is the major role the Indigenous community has in decisions about its heritage. Concurrently, considerable concern was being expressed that Indigenous heritage was not sufficiently addressed in environmental impact assessment processes ahead of development. Seen as absent from the process was the inclusion of Indigenous knowledge about both the pre- and post-contact story as well as any scientific advance in understanding an area’s Indigenous history. This research examines these contrasting perspectives seeking to understand the ideals of community archaeology and its potential to value all aspects of Indigenous heritage and so benefit the relevant community. The ideals of community archaeology build on past community collaborations in Australia and also respond to more recent societal recognition of Indigenous rights, reflected in more ethically inclusive planning and heritage statutes. Indigenous communities expressed the view that current systems are still not meeting these policy commitments to give them control over their heritage. This research has examined the on-the-ground reality of heritage work on the outskirts of Canberra and Melbourne. The case studies compare Victorian and ACT heritage management processes across community partnerships with public land managers, and examine how pre-development surveys operate. 
I conclude that considerable potential for achieving community archaeology ideals exists, and that they are occasionally partially realised; however, barriers continue. In essence, the archaeological model persists, even though a community archaeology approach requires a wider set of skills to ensure comprehensive engagement with an Indigenous community. Other obstacles in the current Indigenous heritage management system include a lack of knowledge and communication about national standards for heritage processes among government agencies and heritage consultants; an administrative framework that can result in inertia or silos between relevant agencies; and funding timeframes that limit possibilities for long-term strategic programs for early identification and management planning for Indigenous heritage. Also, Indigenous communities have varying levels of authority to speak for how their heritage should be managed, yet may not have the resources to do so. This thesis suggests ways to breach these barriers and achieve more inclusive Indigenous heritage management based on community archaeology principles. Policies for greater acknowledgement of the Indigenous community's authority to speak for Country, processes that enable an early and comprehensive 'mapping' of Country, and long-term resourcing of communities may have been promised before; in this research I suggest ways to realise such goals.

Relevance:

30.00%

Publisher:

Abstract:

In this thesis, wind-wave prediction and analysis in the Southern Caspian Sea are surveyed. This subject is of great importance for reducing loss of life and financial damage in marine activities such as monitoring marine pollution, designing marine structures, shipping, fishing, the offshore industry and tourism. For wave prediction, this study uses the Caspian Sea topography data extracted from the Caspian Sea hydrography map of the Iran Armed Forces Geographical Organization and the 10-metre wind field data extracted from the GTS synoptic data transmitted by regional centres to the Forecasting Center of the Iran Meteorological Organization; for wave analysis it uses the 20012 wave records from the oil company's buoy located 28 kilometres off the Neka shore. The results of this research are as follows. Because of the disagreement between the predictions of the SMB method in the Caspian Sea and the wave data of the Anzali and Neka buoys, the SMB method is not able to predict wave characteristics in the Southern Caspian Sea. Because of the relatively good agreement between the WAM model output in the Caspian Sea and the wave data of the Anzali buoy, the WAM model is able to predict wave characteristics in the Southern Caspian Sea with relatively high accuracy. The extreme wave height distribution function fitted to the Southern Caspian Sea wave data is obtained by determining the free parameters of the Poisson-Gumbel function through the method of moments: A=2.41, B=0.33. The maximum relative error between the 4-year return value of significant wave height estimated by this function and the wave data of the Neka buoy is about 35%. The 100-year return value of the Southern Caspian Sea significant wave height is about 4.97 metres.
The maximum relative error between the 4-year return value of significant wave height estimated by the peak-over-threshold statistical model and the wave data of the Neka buoy is about 2.28%. The parametric relation fitted to the Southern Caspian Sea frequency spectra is obtained by determining the free parameters of the Strekalov, Massel and Krylov et al. multipeak spectra through a mathematical method: A=2.9, B=26.26, C=0.0016, m=0.19 and n=3.69. The maximum relative error between the calculated free parameters of the Southern Caspian Sea multipeak spectrum and the free parameters of the double-peaked spectrum proposed by Massel and Strekalov on experimental data from the Caspian Sea is about 36.1% in the energetic part of the spectrum and about 74% in the high-frequency part. The peak-over-threshold wave rose of the Southern Caspian Sea shows that the maximum occurrence probability belongs to waves of 2-2.5 metres height. The error sources in the statistical analysis are mainly: 1) the wave data missing over two years owing to battery discharge of the Neka buoy; 2) the 15% deviation of the single-year annual mean significant wave height from the long-period average, caused by the lack of adequate measurements of oceanic waves. The error sources in the spectral analysis are due to the above items and to the low accuracy of the proposed free parameters of the double-peaked spectrum on the experimental data from the Caspian Sea.
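A common form of the Poisson-Gumbel return value mentioned above is x_N = A + B·ln(νN), where ν is the mean annual number of storm events. The abstract reports A=2.41 and B=0.33 but not ν, so the sketch below treats ν as an assumed, illustrative parameter (back-solved so the 100-year value lands near the quoted 4.97 m); it shows the shape of the calculation, not the thesis's exact procedure.

```python
import math

# Hedged sketch of an N-year return value under a Poisson-Gumbel model:
#   x_N = A + B * ln(nu * N)
# A, B come from the abstract; nu is an ASSUMED storm rate (events/year),
# back-solved for illustration and not reported in the thesis.
def poisson_gumbel_return_value(A, B, nu, years):
    """Significant wave height exceeded on average once every `years` years."""
    return A + B * math.log(nu * years)

A, B = 2.41, 0.33
nu = 23.4  # assumed events/year, chosen to reproduce the quoted 100-year value
h100 = poisson_gumbel_return_value(A, B, nu, 100)
h4 = poisson_gumbel_return_value(A, B, nu, 4)
print(round(h100, 2))  # close to the 4.97 m quoted in the abstract
```

Note how the return value grows only logarithmically with the return period, which is why the 100-year value (≈5 m) is not dramatically larger than the 4-year one.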

Relevance:

30.00%

Publisher:

Abstract:

Phosphorylation is amongst the most crucial and well-studied post-translational modifications. It is involved in multiple cellular processes, which makes phosphorylation prediction vital for understanding protein functions. However, wet-lab techniques are labour- and time-intensive, so computational tools are required for efficiency. This project aims to provide a novel way to predict phosphorylation sites from protein sequences by adding flexibility and the Sezerman Grouping amino acid similarity measure to previous methods, since new protein sequences are discovered at a greater rate than protein structures are determined. The predictor, NOPAY, relies on Support Vector Machines (SVMs) for classification. The features include amino acid encoding, amino acid grouping, predicted secondary structure, predicted protein disorder, predicted protein flexibility, solvent accessibility, hydrophobicity and volume. As a result, we have managed to improve phosphorylation prediction accuracy by 3% for Homo sapiens and 6.1% for Mus musculus. Sensitivity at 99% specificity was also increased on independent test sets, by 6% for Homo sapiens and 5% for Mus musculus. Given enough data, future versions of the software may also be able to make predictions for other organisms.
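The abstract's setup, an SVM over per-residue feature vectors, evaluated by sensitivity at 99% specificity, can be sketched as below. The synthetic data stands in for real encoded protein windows; the feature count and labels are illustrative assumptions, not NOPAY's actual inputs.

```python
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC

# Hedged sketch: SVM classification of candidate phosphorylation sites.
# Synthetic features stand in for the encoding/structure/disorder features
# listed in the abstract (an assumption for illustration).
rng = np.random.default_rng(0)
n, d = 600, 40                       # candidate sites x features
X = rng.standard_normal((n, d))
w = rng.standard_normal(d)
y = (X @ w + 0.5 * rng.standard_normal(n) > 0).astype(int)  # phospho / not

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=0)
clf = SVC(kernel="rbf", probability=True, random_state=0).fit(X_tr, y_tr)
scores = clf.predict_proba(X_te)[:, 1]

# Sensitivity at 99% specificity: threshold at the 99th percentile of
# negative-class scores, then measure recall on the positives.
thr = np.quantile(scores[y_te == 0], 0.99)
sensitivity = (scores[y_te == 1] > thr).mean()
assert 0.0 <= sensitivity <= 1.0
```

Reporting sensitivity at a fixed, very high specificity is the operating point that matters here, since true phosphosites are rare and false positives are costly to validate in the wet lab.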

Relevance:

30.00%

Publisher:

Abstract:

AIMS: Renal dysfunction is a powerful predictor of adverse outcomes in patients hospitalized for acute coronary syndrome. Three new glomerular filtration rate (GFR) estimating equations have recently emerged, based on serum creatinine (CKD-EPIcreat), serum cystatin C (CKD-EPIcyst) or a combination of both (CKD-EPIcreat/cyst), and they are currently recommended to confirm the presence of renal dysfunction. Our aim was to analyse the predictive value of these new estimated GFR (eGFR) equations for mid-term mortality in patients with acute coronary syndrome, and to compare them with the traditional Modification of Diet in Renal Disease (MDRD-4) formula. METHODS AND RESULTS: 801 patients admitted for acute coronary syndrome (age 67.3±13.3 years, 68.5% male) and followed for 23.6±9.8 months were included. For each equation, patients were risk-stratified by eGFR value: high-risk group (eGFR<60 ml/min per 1.73 m2) and low-risk group (eGFR≥60 ml/min per 1.73 m2). The predictive performances of the equations were compared using the areas under the receiver operating characteristic curves (AUCs). Overall improvement in risk stratification was assessed with the net reclassification improvement index. The incidence of the primary endpoint was 18.1%. The CKD-EPIcyst equation had the highest overall discriminative performance for mid-term mortality (AUC 0.782±0.20) and outperformed all other equations (p<0.001 for all comparisons). Compared with the MDRD-4 formula, the CKD-EPIcyst equation accurately reclassified a significant percentage of patients into more appropriate risk categories (net reclassification improvement index of 11.9%; p=0.003). The CKD-EPIcyst equation added prognostic power to the Global Registry of Acute Coronary Events (GRACE) score in the prediction of mid-term mortality.
CONCLUSION: The CKD-EPIcyst equation provides a novel and improved method for assessing the mid-term mortality risk in patients admitted for acute coronary syndrome, outperforming the most widely used formula (MDRD-4), and improving the predictive value of the GRACE score. These results reinforce the added value of cystatin C as a risk marker in these patients.
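The net reclassification improvement used above compares how two risk stratifications (here, MDRD-4 vs CKD-EPIcyst categories) move patients between risk groups, rewarding upward moves for patients who died and downward moves for survivors. A minimal sketch with made-up category data:

```python
import numpy as np

# Hedged sketch of the net reclassification improvement (NRI).
# old_cat/new_cat are risk categories (0 = low risk, 1 = high risk);
# event marks the outcome (1 = died). The example data are synthetic.
def nri(old_cat, new_cat, event):
    old, new, ev = map(np.asarray, (old_cat, new_cat, event))
    up, down = new > old, new < old
    ev_term = up[ev == 1].mean() - down[ev == 1].mean()    # events: up is good
    ne_term = down[ev == 0].mean() - up[ev == 0].mean()    # non-events: down is good
    return ev_term + ne_term

# Toy example: the new equation moves two of four deceased patients up a
# category and one of four survivors down a category.
old = [0, 0, 1, 1, 0, 0, 1, 0]
new = [1, 1, 1, 1, 0, 0, 0, 0]
died = [1, 1, 1, 1, 0, 0, 0, 0]
print(round(nri(old, new, died), 3))  # → 0.75
```

A positive NRI, as in the study's 11.9%, means the new equation's reclassifications are, on balance, in the clinically correct direction.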

Relevance:

30.00%

Publisher:

Abstract:

The main aim of this book is to consider how the sales function informs business strategy. Although a number of books address how to manage the sales team tactically, this text addresses how sales can help organizations become more customer oriented. Many organizations face escalating costs and growing customer power, which makes it necessary to allocate resources more strategically. The sales function can provide critical customer and market knowledge to inform both innovation and marketing. Sales is responsible for building customer knowledge and for networking both internally and externally to help create additional customer value, as well as for the more traditional role of managing customer relationships and selling. The text considers how sales organizations are responding to increasing competition, more demanding customers and a more complex selling environment. We identify many of the challenges facing organizations today and offer discussions of possible solutions. The book considers the changing nature of sales and how activities can be aligned within the organization, as well as market sensing, creating customer focus and the role of sales leadership. The text includes illustrations (short case studies) provided by a range of successful organizations operating in a number of industries. Sales and senior management play an important role in ensuring that the sales team's activities are aligned to business strategy and in creating an environment that allows salespeople to be more successful in developing new business opportunities and building long-term profitable business relationships. One of the objectives of this book is to consider how conventional thinking has changed in the last five years and to integrate it with examples from sales practice to provide a more complete picture of the role of sales within the modern organization.

Relevance:

30.00%

Publisher:

Abstract:

Most economic transactions nowadays depend on the effective exchange of information, in which digital resources play a huge role. New actors come into existence all the time, so organizations face difficulties in keeping their current customers and attracting new customer segments and markets. Companies are trying to find the key to their success, and creating superior customer value seems to be one solution. Digital technologies can be used to deliver value to customers in ways that extend customers' normal conscious experiences in the context of time and space. By creating customer value, companies can gain the increased loyalty of existing customers and better ways to serve new customers effectively. Based on these assumptions, the objective of this study was to design a framework to enable organizations to create customer value in digital business. The research was carried out as a literature review and an empirical study, which consisted of a web-based survey and semi-structured interviews. The data from the empirical study were analyzed as mixed research with qualitative and quantitative methods; these methods were used because the object of the study was to gain a deeper understanding of an existing phenomenon, so the study uses statistical procedures while describing value creation as a phenomenon. The framework was designed first based on the literature and then updated based on the findings from the empirical study. As a result, relationship, understanding the customer, focusing on the core product or service, product or service quality, incremental innovations, service range, corporate identity, and networks were chosen as the top elements of customer value creation, and measures for these elements were identified. With the measures, companies can manage the elements of value creation when dealing with present and future customers and also manage the operations of the company.
In conclusion, creating customer value requires understanding the customer and a lot of information sharing, which can be eased by digital resources. Understanding the customer helps to produce products and services that fulfill customers’ needs and desires. This could result in increased sales and make it easier to establish efficient processes.

Relevance:

30.00%

Publisher:

Abstract:

Fish meat has a particular chemical composition that gives it high nutritional value. However, this food is known to be highly perishable, an aspect often named as a barrier to fish consumption. The southwestern Paraná region, mirroring the national picture, is characterized by low fish consumption, and one of the strategies aimed at increasing the consumption of this important protein source is encouraging the production of species other than tilapia. Within this context, the characteristics of their meat need to be known. The objective of this study was therefore to evaluate the technological potential of the pacu, grass carp and catfish species. First, the chemical and biometric assessment of the three species was discussed under two distinct descriptive statistical methods, and the discriminating capacity of the study was evaluated. Second, the effects of two different washing processes (acid and alkaline) were evaluated with respect to the removal of nitrogen compounds and pigments and the emulsifying ability of the proteins in the protein base obtained. Finally, in the third phase, the aim was to optimize a GC-MS methodology for the analysis of geosmin and MIB (2-methylisoborneol), the compounds responsible for the earthy/mouldy taste and smell in freshwater fish. The results showed high protein and low lipid contents for the three species. The comparison between means and medians revealed symmetry only for protein values and biometric measurements; lipid levels, when evaluated only by the means, are overestimated for all species. Correlations between body measurements and fillet yield were low, regardless of the species analyzed, and the best prediction equation relates total weight and fillet weight. The biometric variables were the best discriminators among the species.
The evaluation of the washings showed that the acid and alkaline processes were equally effective (p ≥ 0.05), and both significantly (p ≤ 0.05) removed nitrogen compounds from the fish pulps. Regarding the extraction of pigments, assessed by the parameters L*, a*, b*, a removal efficiency was recorded only for the pacu species. When evaluated by the total colour difference (ΔE) before and after washing, both processes (acid/alkaline) produced a ΔE perceptible to the naked eye for all species. The catfish had the clearest meat, with alkaline washing the most effective in removing pigments for this species. Protein bases obtained by alkaline washing have higher emulsifying capacity (p ≤ 0.05) than unwashed pulps and those washed by the acid process. The methodology applied for the quantification of MIB and geosmin showed that the extraction and purification method had low recovery, and future studies should be developed for the identification and quantification of MIB and geosmin in fish samples.
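The total colour difference ΔE used above is, in its simplest (CIE76) form, the Euclidean distance between two L*a*b* readings; values above roughly 3 are generally taken as perceptible to the naked eye. The sample readings below are made up for illustration, not the study's measurements.

```python
import math

# Hedged sketch of the CIE76 colour difference behind the DeltaE values:
# Euclidean distance in CIELAB space between before/after readings.
def delta_e_cie76(lab1, lab2):
    """DeltaE*ab; a value above ~3 is usually visible to the naked eye."""
    return math.dist(lab1, lab2)

before = (52.0, 5.5, 10.2)  # L*, a*, b* of unwashed pulp (made-up values)
after = (58.0, 2.5, 7.2)    # after alkaline washing (made-up values)
de = delta_e_cie76(before, after)
print(round(de, 2))  # → 7.35
assert de > 3.0      # this difference would be perceptible by eye
```

More refined formulas (CIE94, CIEDE2000) weight the lightness and chroma axes differently, but CIE76 is the usual baseline when reporting whether a washing step visibly lightened the pulp.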