947 results for Parametric VaR (Value-at-Risk)


Relevance: 30.00%

Abstract:

Adjoint methods have proven to be an efficient way of calculating the gradient of an objective function with respect to a shape parameter for optimisation, with a computational cost nearly independent of the number of design variables [1]. The approach in this paper links the adjoint surface sensitivities (the gradient of the objective function with respect to surface movement) with the parametric design velocities (the movement of the surface due to a CAD parameter perturbation) in order to compute the gradient of the objective function with respect to the CAD variables.
For a successful implementation of shape optimisation strategies in practical industrial cases, the choice of design variables, or the parameterisation scheme used for the model to be optimised, plays a vital role. Where the goal is to base the optimisation on a CAD model, the choices are to use a NURBS geometry generated from CAD modelling software, where the positions of the NURBS control points are the optimisation variables [2], or to use the feature-based CAD model with all of its construction history, preserving the design intent [3]. The main advantage of using the feature-based model is that the optimised model can be used directly in downstream applications, including manufacturing and process planning.
This paper presents an approach to optimisation based on the feature-based CAD model, which uses the CAD parameters defining the features in the model geometry as the design variables. To capture the CAD surface movement with respect to a change in a design variable, the "Parametric Design Velocity" is calculated, defined as the movement of the CAD model boundary in the normal direction due to a change in the parameter value.
The approach presented here for calculating the design velocities is an advance, in both capability and robustness, over that described by Robinson et al. [3]. The process can be easily integrated into most industrial optimisation workflows and is immune to the topology and labelling issues that affect other CAD-based optimisation processes. It considers every continuous ("real-valued") parameter type as an optimisation variable, and it can be adapted to work with any CAD modelling software, as long as its API provides access to the values of the parameters which control the model shape and allows the model geometry to be exported. To calculate the movement of the boundary, the methodology applies finite differences to the shape of the 3D CAD model before and after the parameter perturbation: the geometric movement is measured along the normal direction between discrete representations of the original and perturbed geometries. The parametric design velocities can then be linked directly with the adjoint surface sensitivities to extract the gradients used by a gradient-based optimisation algorithm.
A flow optimisation problem is presented in which the power dissipation of the flow in an automotive air duct is reduced by changing the parameters of the CAD geometry created in CATIA V5. The flow sensitivities are computed with the continuous adjoint method for laminar and turbulent flow [4] and are combined with the parametric design velocities to compute the cost function gradients. A line-search algorithm is then used to update the design variables and continue the optimisation process.
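The chaining step described above can be sketched in a few lines of Python. This is a minimal illustration, not the paper's implementation: it assumes hypothetical, point-wise matched discretisations of the original and perturbed surfaces, and contracts the finite-difference design velocities with adjoint surface sensitivities to approximate dJ/dp as a discretised surface integral.

```python
import numpy as np

def design_velocity(base_pts, base_normals, perturbed_pts, dp):
    """Finite-difference parametric design velocity: the normal component of
    the boundary movement per unit change in the CAD parameter.
    Assumes the two discretisations are point-wise matched (hypothetical)."""
    disp = perturbed_pts - base_pts                   # surface movement
    vn = np.einsum('ij,ij->i', disp, base_normals)    # project onto normals
    return vn / dp

def objective_gradient(adjoint_sens, velocity, face_areas):
    """Chain adjoint surface sensitivities with design velocities:
    dJ/dp ~ sum_i (dJ/dn)_i * Vn_i * A_i  (discretised surface integral)."""
    return float(np.sum(adjoint_sens * velocity * face_areas))
```

In practice the matched-discretisation assumption is the delicate part; the paper's projection between the original and perturbed meshes replaces the simple point-wise subtraction used here.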

Relevance: 30.00%

Abstract:

AIMS: Our aims were to evaluate the distribution of troponin I concentrations in population cohorts across Europe, to characterize its association with cardiovascular outcomes, to determine its predictive value beyond the variables used in the ESC SCORE, to test a potentially clinically relevant cut-off value, and to retrospectively evaluate improved eligibility for statin therapy based on elevated troponin I concentrations.

METHODS AND RESULTS: Based on the Biomarkers for Cardiovascular Risk Assessment in Europe (BiomarCaRE) project, we analysed individual-level data from 10 prospective population-based studies including 74 738 participants. We investigated the value of adding troponin I levels to conventional risk factors for the prediction of cardiovascular disease by calculating measures of discrimination (C-index) and net reclassification improvement (NRI). We further tested the clinical implications of statin therapy based on troponin concentration in 12 956 individuals free of cardiovascular disease in the JUPITER study. Troponin I remained an independent predictor, with a hazard ratio of 1.37 for cardiovascular mortality, 1.23 for cardiovascular disease, and 1.24 for total mortality. The addition of troponin I information to a prognostic model for cardiovascular death constructed from the ESC SCORE variables increased the C-index discrimination measure by 0.007 and yielded an NRI of 0.048, whereas its addition to the prognostic models for cardiovascular disease and total mortality led to smaller C-index and NRI increments. In individuals above 6 ng/L of troponin I, a concentration near the upper quintile in BiomarCaRE (5.9 ng/L) and JUPITER (5.8 ng/L), rosuvastatin therapy resulted in a higher absolute risk reduction than in individuals below 6 ng/L of troponin I, whereas the relative risk reduction was similar.

CONCLUSION: In individuals free of cardiovascular disease, the addition of troponin I to the variables of an established risk score improves the prediction of cardiovascular death and cardiovascular disease.
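The categorical net reclassification improvement reported above can be sketched as follows. This is a generic illustration of the standard NRI formula, not the BiomarCaRE analysis code; the risk categories and data in the usage example are hypothetical.

```python
import numpy as np

def net_reclassification_improvement(old_cat, new_cat, event):
    """Categorical net reclassification improvement (NRI).
    old_cat / new_cat: integer risk categories assigned by the two models;
    event: 1 if the outcome occurred, 0 otherwise.
    NRI = (P(up|event) - P(down|event)) + (P(down|non-event) - P(up|non-event))."""
    old_cat, new_cat, event = map(np.asarray, (old_cat, new_cat, event))
    up, down = new_cat > old_cat, new_cat < old_cat
    ev, ne = event == 1, event == 0
    nri_events = (up[ev].mean() - down[ev].mean()) if ev.any() else 0.0
    nri_nonevents = (down[ne].mean() - up[ne].mean()) if ne.any() else 0.0
    return float(nri_events + nri_nonevents)
```

A positive NRI indicates that the new model moves events up and non-events down in risk category more often than the reverse.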

Relevance: 30.00%

Abstract:

Following the intrinsically linked balance sheets in his Capital Formation Life Cycle, Lukas M. Stahl explains with his Triple A Model of Accounting, Allocation and Accountability the stages of the Capital Formation process from FIAT to EXIT. Based on the theoretical foundations of legal risk laid by the International Bar Association with the help of Roger McCormick and legal scholars such as Joanna Benjamin, Matthew Whalley and Tobias Mahler, and building on Wesley Hohfeld's category theory of jural relations, Stahl develops his mutually exclusive Four Determinants of Legal Risk: Law, Lack of Right, Liability and Limitation. These Four Determinants of Legal Risk allow us to apply, assess, and precisely describe the respective legal risk at all stages of the Capital Formation Life Cycle, as demonstrated in case studies of nine industry verticals of the proposed and currently negotiated Transatlantic Trade and Investment Partnership (TTIP) between the United States of America and the European Union, as well as in the often cited financing relation between the United States and the People's Republic of China. Having established the Four Determinants of Legal Risk and their application to the Capital Formation Life Cycle, Stahl then explores the theoretical foundations of capital formation, their historical basis in classical and neo-classical economics and their forefathers, such as the Austrians around Eugen von Boehm-Bawerk, Ludwig von Mises and Friedrich von Hayek and, most notably and controversially, Karl Marx, and their impact on today's exponential expansion of capital formation. Starting with the first pillar of his Triple A Model, Accounting, Stahl explains the Three Factors of Capital Formation, Man, Machines and Money, and shows how "value added" is created with respect to the non-monetary capital factors of human resources and industrial production.
In a detailed analysis of the roles of the Three Actors of Monetary Capital Formation, Central Banks, Commercial Banks and Citizens, Stahl dismisses a number of myths regarding the creation of money, providing in-depth insight into the workings of monetary policy makers, their institutions and their ultimate beneficiaries, the corporate and consumer citizens. In his second pillar, Allocation, Stahl continues his analysis of the balance sheets of the Capital Formation Life Cycle by discussing the role of the Five Key Accounts of Monetary Capital Formation, the Sovereign, Financial, Corporate, Private and International accounts, and the associated legal risks in the allocation of capital pursuant to his Four Determinants of Legal Risk. In his third pillar, Accountability, Stahl discusses the recurring Crisis-Reaction-Acceleration-Sequence-History (in short: CRASH) since the beginning of the millennium: the dot-com crash at the turn of the millennium, followed seven years later by the financial crisis of 2008, and the dislocations in the global economy we face another seven years later, in 2015, with several sordid debt restructurings under way and hundreds of thousands of refugees displaced by war and increasing inequality. Together with the regulatory reactions these crises have provoked in the form of so-called landmark legislation, such as the Sarbanes-Oxley Act of 2002, the Dodd-Frank Act of 2010, the JOBS Act of 2012, the Basel Accords (Basel II in 2004 and Basel III in 2010), the European Financial Stability Facility of 2010, the European Stability Mechanism of 2012 and the European Banking Union of 2013, Stahl analyses the acceleration in the size and scope of crises, the often seemingly helpless bureaucratic responses they attract, the inherent legal risks, and the complete lack of accountability on the part of those responsible.
Stahl argues that the order of the day is to address the root cause of these problems: two fundamental design defects of our Global Economic Order, namely our monetary and judicial orders. Inspired by a 1933 plan of nine University of Chicago economists to abolish the fractional reserve system, he proposes the introduction of Sovereign Money as a prerequisite for voiding misallocations by judicial order in the course of domestic and transnational insolvency proceedings, including the restructuring of sovereign debt, throughout the entire monetary system back to its origin, without causing domino effects of banking collapses and failed financial institutions. Recognizing the Austrian-American economist Schumpeter's concept of Creative Destruction, a process of industrial mutation that incessantly revolutionizes the economic structure from within, destroying the old one and creating a new one, Stahl responds to Schumpeter's economic chemotherapy with his concept of Equitable Default, an immunotherapy that strengthens the corpus economicus' own immune system by providing the judicial authority to terminate precisely those misallocations that have proven malignant and caused default, drawing on the centuries-old common law concept of equity, which allows for the equitable reformation, rescission or restitution of contract by judicial order. Following a review of the proposed mechanisms of transnational dispute resolution and current court systems with transnational jurisdiction, Stahl advocates, as a first step towards completing the Capital Formation Life Cycle from FIAT, the creation of money by way of credit, to EXIT, the termination of money by way of judicial order, the institution of a Transatlantic Trade and Investment Court constituted by a panel of judges from the U.S. Court of International Trade and the European Court of Justice, following the model of the EFTA Court of the European Free Trade Association.
Since his proposal was first made public in June 2014, after being discussed in academic circles since 2011, it and similar proposals have found numerous public supporters. Most notably, the former Vice President of the European Parliament, David Martin, tabled an amendment in June 2015 in the course of the TTIP negotiations calling for an independent judicial body, and the Member of the European Commission, Cecilia Malmström, presented her proposal for an International Investment Court on September 16, 2015. Stahl concludes that, for the first time in the history of our generation, there appears to be a real opportunity to reform our Global Economic Order by curing the two fundamental design defects of our monetary and judicial orders: the abolition of the fractional reserve system and the introduction of Sovereign Money, and the institution of a democratically elected Transatlantic Trade and Investment Court which, with its jurisdiction extending to cases concerning the Transatlantic Trade and Investment Partnership, may complete the Capital Formation Life Cycle, resolving cases of default with the transnational judicial authority for terminal resolution of misallocations in a New Global Economic Order, without the ensuing dangers of systemic collapse, from FIAT to EXIT.

Relevance: 30.00%

Abstract:

Purpose. To evaluate the contribution of ocular risk factors to the conversion of the fellow eye of patients with unilateral exudative AMD, using a novel semiautomated grading system. Materials and Methods. Single-center, retrospective study including 89 consecutive patients with unilateral exudative AMD and ≥3 years of follow-up. Baseline color fundus photographs were graded using an innovative grading software, RetmarkerAMD (Critical Health SA). Results. The follow-up period was 60.9 ± 31.3 months. The occurrence of CNV was confirmed in 42 eyes (47.2%). The cumulative incidence of CNV was 23.6% at 2 years, 33.7% at 3 years, 39.3% at 5 years, and 47.2% at 10 years, with a mean annual incidence of 12.0% (95% CI 0.088-0.162). The absolute number of drusen in the central 1000 and 3000 µm (P < 0.05) and the absolute number of drusen ≥125 µm in the central 3000 and 6000 µm (P < 0.05) proved to be significant risk factors for CNV. Conclusion. The use of quantitative variables in the determination of the OR of developing CNV allowed the establishment of significant risk factors for neovascularization. The long follow-up period and the innovative methodology reinforce the value of our results. This trial is registered with ClinicalTrials.gov NCT00801541.

Relevance: 30.00%

Abstract:

We consider a parametric semilinear Dirichlet problem driven by the Laplacian plus an indefinite unbounded potential, with a reaction of superdiffusive type. Using variational and truncation techniques, we show that there exists a critical parameter value λ* > 0 such that for all λ > λ* the problem has at least two positive solutions, for λ = λ* the problem has at least one positive solution, and no positive solutions exist when λ ∈ (0, λ*). Also, we show that for λ ≥ λ* the problem has a smallest positive solution.
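The bifurcation-type result above can be restated compactly (writing (P_λ) for the parametric problem, a label introduced here for illustration):

```latex
\[
\#\{\text{positive solutions of } (P_\lambda)\} \;
\begin{cases}
\ge 2, & \lambda > \lambda_*,\\
\ge 1, & \lambda = \lambda_*,\\
= 0, & 0 < \lambda < \lambda_*,
\end{cases}
\qquad
\text{with a smallest positive solution for every } \lambda \ge \lambda_*.
\]
```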

Relevance: 30.00%

Abstract:

Finance is one of the fastest growing areas of modern applied mathematics with real-world applications. The interest of this branch of applied mathematics is best described by an example involving shares. Shareholders of a company receive dividends, which come from the profit made by the company. The proceeds of the company, once it is taken over or wound up, will also be distributed to shareholders. Shares therefore have a value that reflects the views of investors about the likely dividend payments and capital growth of the company, and this value is quantified by the share price on stock exchanges. Financial modelling serves to understand the correlations between assets and buy/sell movements in order to reduce risk. Such activities depend on financial analysis tools being available to the trader, with which rapid and systematic evaluation of buy/sell contracts can be made. There are other financial activities, but it is not the intention of this paper to discuss all of them. The main concern of this paper is to propose a parallel algorithm for the numerical solution of a European option. This paper is organised as follows. First, a brief introduction is given to a simple mathematical model for European options and possible numerical schemes for solving it. Second, the Laplace transform is applied to the mathematical model, which leads to a set of parametric equations whose solutions may be found concurrently. Numerical inversion of the Laplace transform is done by means of an inversion algorithm developed by Stehfest. The scalability of the algorithm in a distributed environment is demonstrated. Third, the performance of the present algorithm is compared with a spatial domain decomposition developed particularly for the time-dependent heat equation. Finally, a number of issues are discussed and future work is suggested.
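The inversion step can be sketched in Python with the standard Gaver-Stehfest formula (a generic sketch, not the paper's implementation). Note that the N evaluations F(k ln2 / t) are mutually independent, which is exactly what makes the scheme embarrassingly parallel across k, and across different time points t.

```python
import math

def stehfest_coefficients(N):
    """Stehfest weights V_k for an even number of terms N."""
    V = []
    for k in range(1, N + 1):
        s = 0.0
        for j in range((k + 1) // 2, min(k, N // 2) + 1):
            s += (j ** (N // 2) * math.factorial(2 * j)) / (
                math.factorial(N // 2 - j) * math.factorial(j)
                * math.factorial(j - 1) * math.factorial(k - j)
                * math.factorial(2 * j - k))
        V.append((-1) ** (k + N // 2) * s)
    return V

def stehfest_invert(F, t, N=12):
    """Approximate f(t) from its Laplace transform F(s):
    f(t) ~ (ln2 / t) * sum_k V_k F(k ln2 / t).
    The N transform evaluations are independent and can run in parallel."""
    ln2 = math.log(2.0)
    V = stehfest_coefficients(N)
    return (ln2 / t) * sum(V[k - 1] * F(k * ln2 / t) for k in range(1, N + 1))
```

In a distributed setting, each worker would evaluate its share of the F(k ln2 / t) terms (each of which here stands for solving one parametric equation) before a final weighted reduction.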

Relevance: 30.00%

Abstract:

Sandpits used by children are frequently visited by wildlife, which constitutes a source of fungal pathogens and allergenic fungi. This study aimed to take an unannounced snapshot of the urban levels of fungal contaminants in sands, using for this purpose two public recreational parks, three elementary schools and two kindergartens. All samples were from Lisbon and neighboring municipalities and were tested for fungi of clinical interest. Potentially pathogenic fungi were isolated from all samples but one. Fusarium dimerum (32.4%) was found to be the dominant species in one park and Chrysonilia spp. in the other (46.6%). Fourteen different species and genera were detected, and no dermatophytes were found. Of these, the fungi most often isolated from the samples of the elementary schools were Penicillium spp. (74%), Cladophialophora spp. (38%) and Cladosporium spp. (90%). Five dominant species and genera were isolated from the kindergartens. Penicillium spp. was the only genus isolated in one, though with remarkably high counts (32,500 colony-forming units per gram). In the other kindergarten Penicillium spp. was also the most abundant genus, accounting for 69% of all the fungi found. All of the samples exceeded the Maximum Recommended Value (MRV) for beach sand defined by Brandão et al. 2011, currently the only quantitative guideline available for the same matrix. The fungi found confirm the potential risk of exposure of children to keratinophilic fungi and demonstrate that regular cleaning or replacement of the sand needs to be implemented in order to minimize contamination.

Relevance: 30.00%

Abstract:

This empirical study compares the ability of unrestricted vector autoregression (VAR) models to forecast the term structure of interest rates in Colombia. Simple VAR models are compared with VAR models augmented with Colombian and US macroeconomic and financial factors. We find that including information on oil prices, Colombian credit risk and an international risk-aversion indicator improves the out-of-sample forecasting ability of unrestricted VAR models for short-term maturities at a monthly frequency. For medium- and long-term maturities, the models without macroeconomic variables produce better forecasts, suggesting that the medium- and long-term yield curves already incorporate all the information relevant for forecasting them. This finding has important implications for portfolio managers, market participants and policy makers.
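An unrestricted VAR of the kind compared in the study can be fitted by equation-by-equation OLS. The sketch below (a VAR(1) on simulated data, purely illustrative and unrelated to the study's yield-curve dataset) shows the estimation and the iterated out-of-sample forecast.

```python
import numpy as np

def fit_var1(Y):
    """Fit an unrestricted VAR(1) by OLS: y_t = c + A y_{t-1} + e_t.
    Y is a (T, k) array of observations. Returns intercept c and matrix A."""
    X = np.hstack([np.ones((len(Y) - 1, 1)), Y[:-1]])  # regressors: const + lag
    B, *_ = np.linalg.lstsq(X, Y[1:], rcond=None)      # (1+k, k) coefficients
    return B[0], B[1:].T                               # c (k,), A (k, k)

def forecast_var1(c, A, y_last, steps):
    """Iterate the fitted VAR(1) forward for out-of-sample forecasts."""
    out, y = [], np.asarray(y_last, dtype=float)
    for _ in range(steps):
        y = c + A @ y
        out.append(y.copy())
    return np.array(out)
```

Augmenting the VAR with macro-financial factors amounts to widening Y with the extra series; forecast comparison then proceeds by refitting on rolling samples and scoring the out-of-sample errors per maturity.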

Relevance: 30.00%

Abstract:

This thesis examines the spatial and temporal variation in nitrogen dioxide (NO2) levels in Guernsey and the impacts on pre-existing asthmatics. Whilst air quality in Guernsey is generally good, the levels of NO2 exceed UK standards in several locations. The evidence indicates that people suffering from asthma experience exacerbation of their symptoms if exposed to elevated levels of air pollutants including NO2, although this research has never been carried out in Guernsey before. In addition, exposure assessment of individuals is rarely carried out, and research in this area is limited due to the complexity of undertaking such a study, which must combine exposures in the home, the workplace and ambient exposures, all of which vary with the individual's daily experience. For the first time in Guernsey, this research has examined NO2 levels in correlation with asthma patient admissions to hospital, together with assessment of NO2 exposures in typical homes and typical workplaces in Guernsey. The data showed a temporal correlation between NO2 levels and the number of hospital admissions, and the trend from 2008-2012 was upwards. Statistical analysis did not show a significant linear correlation due to the small size of the data sets. Exposure assessment of individuals showed a spatial variation in exposures in Guernsey, and assessment in indoor environments showed that real-time analysis of NO2 levels needs to be undertaken if indoor microenvironments for NO2 are to be assessed adequately. There was temporal and spatial variation in NO2 concentrations measured using diffusion tubes, which provide a monthly mean value, and analysers measuring NO2 concentrations in real time. The research shows that building layout and design are important factors for good air flow and ventilation and the dispersion of NO2 indoors.
Environmental Health Officers have statutory responsibilities for ambient air quality, hygiene of buildings and workplace environments and this role needs to be co-ordinated with healthcare professionals to improve health outcomes for asthmatics. The outcome of the thesis was the development of a risk management framework for pre-existing asthmatics at work for use by regulators of workplaces and an information leaflet to assist in improving health outcomes for asthmatics in Guernsey.

Relevance: 30.00%

Abstract:

AIMS: Renal dysfunction is a powerful predictor of adverse outcomes in patients hospitalized for acute coronary syndrome. Three new glomerular filtration rate (GFR) estimating equations recently emerged, based on serum creatinine (CKD-EPIcreat), serum cystatin C (CKD-EPIcyst) or a combination of both (CKD-EPIcreat/cyst), and they are currently recommended to confirm the presence of renal dysfunction. Our aim was to analyse the predictive value of these new estimated GFR (eGFR) equations for mid-term mortality in patients with acute coronary syndrome, and to compare them with the traditional Modification of Diet in Renal Disease (MDRD-4) formula. METHODS AND RESULTS: 801 patients admitted for acute coronary syndrome (age 67.3±13.3 years, 68.5% male) and followed for 23.6±9.8 months were included. For each equation, patients were risk-stratified based on eGFR values: a high-risk group (eGFR<60 ml/min per 1.73 m2) and a low-risk group (eGFR≥60 ml/min per 1.73 m2). The predictive performances of the equations were compared using the areas under the receiver operating characteristic curves (AUCs). Overall improvement in risk stratification was assessed by the net reclassification improvement index. The incidence of the primary endpoint was 18.1%. The CKD-EPIcyst equation had the highest overall discriminative performance for mid-term mortality (AUC 0.782±0.20) and outperformed all other equations (p<0.001 in all comparisons). When compared with the MDRD-4 formula, the CKD-EPIcyst equation accurately reclassified a significant percentage of patients into more appropriate risk categories (net reclassification improvement index of 11.9%; p=0.003). The CKD-EPIcyst equation added prognostic power to the Global Registry of Acute Coronary Events (GRACE) score in the prediction of mid-term mortality.
CONCLUSION: The CKD-EPIcyst equation provides a novel and improved method for assessing the mid-term mortality risk in patients admitted for acute coronary syndrome, outperforming the most widely used formula (MDRD-4), and improving the predictive value of the GRACE score. These results reinforce the added value of cystatin C as a risk marker in these patients.
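For orientation, the published 2009 CKD-EPI creatinine equation (the CKD-EPIcreat variant mentioned above) can be sketched as follows; the cystatin C variant favoured by the study has an analogous piecewise form. This is an illustrative transcription of the general published formula, not the study's code, and the optional race coefficient is omitted for simplicity.

```python
def ckd_epi_creatinine(scr_mg_dl, age, female):
    """Illustrative 2009 CKD-EPI creatinine eGFR (mL/min/1.73 m^2):
    eGFR = 141 * min(Scr/k, 1)^a * max(Scr/k, 1)^-1.209 * 0.993^age [* 1.018 if female]
    with k = 0.7 (female) / 0.9 (male) and a = -0.329 (female) / -0.411 (male)."""
    kappa = 0.7 if female else 0.9
    alpha = -0.329 if female else -0.411
    ratio = scr_mg_dl / kappa
    egfr = (141.0 * min(ratio, 1.0) ** alpha
            * max(ratio, 1.0) ** -1.209 * 0.993 ** age)
    return egfr * 1.018 if female else egfr
```

Applying the study's cut-off, a patient would fall in the high-risk group when the returned eGFR is below 60 mL/min per 1.73 m2.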

Relevance: 30.00%

Abstract:

BACKGROUND: Risk assessment is fundamental in the management of acute coronary syndromes (ACS), enabling estimation of prognosis. AIMS: To evaluate whether the combined use of the GRACE and CRUSADE risk stratification schemes in patients with myocardial infarction outperforms each of the scores individually in terms of mortality and haemorrhagic risk prediction. METHODS: Observational retrospective single-centre cohort study including 566 consecutive patients admitted for non-ST-segment elevation myocardial infarction. The CRUSADE model increased the discriminatory performance of GRACE in predicting all-cause mortality, as ascertained by Cox regression, demonstrating the independent and additive predictive value of CRUSADE, which was sustained throughout follow-up. The cohort was divided into four subgroups: G1 (GRACE<141; CRUSADE<41), G2 (GRACE<141; CRUSADE≥41), G3 (GRACE≥141; CRUSADE<41) and G4 (GRACE≥141; CRUSADE≥41). RESULTS: Outcomes and variables estimating clinical severity, such as admission Killip-Kimball class and left ventricular systolic dysfunction, deteriorated progressively across the subgroups (G1 to G4). Survival analysis differentiated three risk strata (G1, lowest risk; G2 and G3, intermediate risk; G4, highest risk). The GRACE+CRUSADE model showed higher prognostic performance (area under the curve [AUC] 0.76) than GRACE alone (AUC 0.70) for mortality prediction, further confirmed by the integrated discrimination improvement index. Moreover, GRACE+CRUSADE combined risk assessment appeared valuable for delineating bleeding risk in this setting, identifying G4 as a very-high-risk subgroup (hazard ratio 3.5; P<0.001). CONCLUSIONS: Combined risk stratification with the GRACE and CRUSADE scores can improve the individual discriminatory power of the GRACE and CRUSADE models in the prediction of all-cause mortality and bleeding. This combined assessment is a practical approach that is potentially advantageous in treatment decision-making.


Relevance: 30.00%

Abstract:

Australian forest industries have a long history of export trade in a wide range of products, from woodchips (for paper manufacturing) and sandalwood (essential oils, carving and incense) to high-value musical instruments, flooring and outdoor furniture. For the high-value group, fluctuating environmental conditions brought on by changes in temperature and relative humidity can lead to performance problems due to consequential swelling, shrinkage and/or distortion of the wood elements. A survey determined the types of value-added products exported, including the species and dimensions, the packaging used and the export markets. Data loggers were installed with shipments to monitor temperature and relative humidity conditions. These data were converted to timber equilibrium moisture content values to provide an indication of the environment to which the wood elements would be acclimatising. The results of the initial survey indicated that the primary high-value wood export products included guitars, flooring, decking and outdoor furniture. The destination markets were mainly located in the northern hemisphere, particularly the United States of America, China, Hong Kong, Europe (including the United Kingdom), Japan, Korea and the Middle East. Other regions importing Australian-made wooden articles were south-east Asia, New Zealand and South Africa. Different timber species have differing rates of swelling and shrinkage, so the types of timber were also recorded during the survey. This work determined that the major species were ash-type eucalypts from south-eastern Australia (commonly referred to in the market as Tasmanian oak), jarrah from Western Australia, and spotted gum, hoop pine, white cypress, blackbutt, brush box and Sydney blue gum from Queensland and New South Wales. The environmental conditions data indicated that microclimates in shipping containers can fluctuate extensively during shipping.
Conditions at the time of manufacturing were usually between 10 and 12% equilibrium moisture content; however, conditions during shipping could range from 5% (very dry) to 20% (very humid). The packaging systems used were reported to be efficient at protecting the wooden articles from damage during transit. The research highlighted the potential risk for wood components to 'move' in response to periods of drier or more humid conditions than those at the time of manufacturing, and the importance of engineering a packaging system that can account for the environmental conditions experienced in shipping containers. Examples of potential dimensional changes in wooden components were calculated based on published unit shrinkage data for key species and the climatic data returned from the logging equipment. The information highlighted the importance of good design to account for possible timber movement during shipping. A timber movement calculator was developed to allow designers to input component species, dimensions, site of manufacture and destination, to validate their product design.
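The core of a movement calculator of the kind described is the standard unit-shrinkage approximation: dimensional change ≈ dimension × unit shrinkage × change in equilibrium moisture content. The sketch below is illustrative only; the unit shrinkage figure in the usage example is hypothetical, not a published value for any particular species.

```python
def timber_movement(width_mm, unit_shrinkage_pct_per_pct, emc_origin_pct, emc_dest_pct):
    """Estimated dimensional change (mm) of a wood component when the
    equilibrium moisture content (EMC) moves from origin to destination.
    unit_shrinkage_pct_per_pct: % dimensional change per 1% EMC change
    (species- and direction-specific; tangential vs radial values differ).
    Positive result = swelling, negative = shrinkage."""
    delta_emc = emc_dest_pct - emc_origin_pct
    return width_mm * (unit_shrinkage_pct_per_pct / 100.0) * delta_emc
```

For example, a 100 mm wide board with a (hypothetical) unit shrinkage of 0.3 %/% shipped from a 12% EMC factory into a 18% EMC container climate would be expected to swell by about 1.8 mm.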

Relevance: 30.00%

Abstract:

Previous research projects in the field of logistics services have emphasized the importance of value-added services in customer value creation. Through value-added services, companies can extend their service portfolio and gain higher customer satisfaction and loyalty. At a more general level, service marketing has been recognized to be challenging due to the intangible nature of services, which has caused issues in pricing and value perceptions. To tackle these issues, scholars have suggested well-managed customer reference marketing practices. The main goal of this research is to identify shortages in the current service offering and how these shortages can be fixed. Due to the low capacity utilization of warehouse premises, there is a need to find the main factors which are causing or affecting the current situation. The research aims to offer a set of alternatives for how to overcome these issues. All the potential business opportunities are evaluated and the promising prospects discussed. The focus is on logistics value-added services and how they affect route decisions in logistics. Simultaneously, the aim is to create a holistic understanding of how added value and offered services affect logistics centralization. Moreover, customer value creation and the effectiveness of customer references in logistics service marketing are emphasized in this project. Logistics value-added services were found to have only a minor effect on logistics decisions: routes are chosen on a low-cost basis. However, it is challenging to track down logistics costs and break them down into different phases. Customer value as such is a difficult concept, which causes challenges when services are sold on value-based principles. Customer references are useful for logistics service providers, and this should be exploited in marketing: they reduce the perceived risk and give credibility to the service provider.

Relevância:

30.00%

Publicador:

Resumo:

This thesis studies the field of asset price bubbles. It comprises three independent chapters, each of which directly or indirectly analyses the existence or implications of asset price bubbles. The type of bubble assumed in each chapter is consistent with rational expectations; such price bubbles are known as rational bubbles in the literature. The three chapters are as follows. Chapter 1: This chapter attempts to explain the recent US housing price bubble by developing a heterogeneous-agent endowment-economy asset pricing model with risky housing, endogenous collateral and defaults. Investment in housing is subject to an idiosyncratic risk, and some mortgages are defaulted on in equilibrium. We analytically derive the leverage, or endogenous loan-to-value ratio, which arises from a limited participation constraint in a one-period mortgage contract with monitoring costs. Our results show that low values of housing investment risk produce a credit-easing effect, encouraging excess leverage and generating credit-driven rational price bubbles in the housing good. Conversely, high values of housing investment risk produce a credit crunch characterized by tight borrowing constraints, low leverage and low house prices. Furthermore, the leverage ratio was found to be procyclical and the rate of defaults countercyclical, consistent with empirical evidence. Chapter 2: It is widely believed that financial asset prices have considerable persistence and are susceptible to bubbles. However, identification of this persistence and of potential bubbles is not straightforward. This chapter tests for price bubbles in the United States housing market, accounting for long memory and structural breaks. The intuition is that the presence of long memory negates price bubbles, while the presence of breaks could artificially induce bubble behaviour.
Hence, we use semi-parametric (Whittle) and parametric (ARFIMA) procedures that are robust to a variety of residual biases to estimate the value of the long memory parameter, d, of the log rent-price ratio. We find that the semi-parametric estimation procedures, which are robust to non-normality and heteroskedastic errors, identified far more bubble regions than the parametric ones. A structural break was identified in the mean and trend of all the series which, when accounted for, removed bubble behaviour in a number of regions. Importantly, the United States housing market showed evidence of rational bubbles at both the aggregate and regional levels. In the third and final chapter, we attempt to answer the following question: to what extent should individuals participate in the stock market and hold risky assets over their lifecycle? We answer this question by employing a lifecycle consumption-portfolio choice model with housing, labour income and time-varying predictable returns, where agents are constrained in the level of their borrowing. We first analytically characterize, and then numerically solve for, the optimal allocation to the risky asset, comparing the return-predictability case with that of IID returns. We successfully resolve the puzzles and find equity holdings and participation rates close to the data. We also find that return predictability substantially alters both the level of risky portfolio allocation and the rate of stock market participation. High realizations of the factor (the dividend-price ratio) and high persistence of the factor process, indicative of stock market bubbles, raise the amount of wealth invested in risky assets and the level of stock market participation, respectively. Conversely, rare disasters were found to bring down these rates, the change being severe for investors in the later years of the lifecycle.
Furthermore, investors facing time-varying returns (return predictability) hedged background risks significantly better than those facing IID returns.