984 results for Efficiency models
Abstract:
The power demand of many mobile working machines such as mine loaders, straddle carriers and harvesters varies significantly during operation, and typically, the average power demand of a working machine is considerably lower than the maximum power demand. Consequently, for most of the time, the diesel engine of a working machine operates at a poor efficiency, far from its optimum efficiency range. However, the energy efficiency of diesel-driven working machines can be improved by electric hybridization. This way, the diesel engine can be dimensioned to operate within its optimum efficiency range, and the electric drive with its energy storage devices responds to changes in machine loading. A hybrid working machine can be implemented in many ways: as a parallel hybrid, a series hybrid or a combination of the two. The energy efficiency of hybrid working machines can be further enhanced by energy recovery and reuse. This doctoral thesis introduces the component models required in the simulation model of a working machine. Component efficiency maps are applied to the modelling; the efficiency maps for electrical machines are determined analytically over the whole torque–rotational speed plane based on the electrical machine parameters. Furthermore, the thesis provides simulation models for parallel, series and parallel-series hybrid working machines. With these simulation models, the energy consumption of the working machine can be analysed. In addition, the hybridization process is introduced and described. The thesis provides a case example of the hybridization and dimensioning process of a working machine, starting from the work cycle of the machine. The selection and dimensioning of the hybrid system have a significant impact on the energy consumption of a hybrid working machine.
The thesis compares the energy consumption of a working machine implemented by three different hybrid systems (parallel, series and parallel-series) and with different component dimensions. The payback time of a hybrid working machine and the energy storage lifetime are also estimated in the study.
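The efficiency-map modelling described above can be sketched as a simple lookup: given a torque-speed operating point, interpolate the component's efficiency and convert mechanical output power to electrical input power. The sketch below is a minimal illustration; the grid axes, map values and function name are invented assumptions, not the thesis's data.

```python
import numpy as np

# Hypothetical efficiency map of an electrical machine on a
# torque (Nm) x speed (rpm) grid; all values are illustrative.
torque_axis = np.array([0.0, 50.0, 100.0, 150.0])
speed_axis = np.array([0.0, 1000.0, 2000.0, 3000.0])
eff_map = np.array([                 # rows: torque, columns: speed
    [0.05, 0.60, 0.55, 0.50],
    [0.70, 0.90, 0.92, 0.88],
    [0.72, 0.91, 0.93, 0.90],
    [0.68, 0.88, 0.90, 0.86],
])

def electrical_input_power(torque_nm, speed_rpm):
    """Bilinearly interpolate efficiency; return (input power in W, efficiency)."""
    ti = np.interp(torque_nm, torque_axis, np.arange(len(torque_axis)))
    si = np.interp(speed_rpm, speed_axis, np.arange(len(speed_axis)))
    t0, s0 = int(ti), int(si)
    t1 = min(t0 + 1, len(torque_axis) - 1)
    s1 = min(s0 + 1, len(speed_axis) - 1)
    ft, fs = ti - t0, si - s0
    eta = ((1 - ft) * (1 - fs) * eff_map[t0, s0] + ft * (1 - fs) * eff_map[t1, s0]
           + (1 - ft) * fs * eff_map[t0, s1] + ft * fs * eff_map[t1, s1])
    p_mech = torque_nm * speed_rpm * 2.0 * np.pi / 60.0   # mechanical power, W
    return p_mech / eta, eta
```

Simulating a work cycle then amounts to evaluating such a map for each component at every time step and accumulating the losses.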
Abstract:
The behavioural finance literature expects systematic and significant deviations from efficiency to persist in securities markets due to the behavioural and cognitive biases of investors. These behavioural models attempt to explain the coexistence of intermediate-term momentum and long-term reversals in stock returns based on systematic violations of rational investor behaviour. The study investigates the anchoring bias of investors and the profitability of the 52-week high momentum strategy (GH henceforward). The relatively highly volatile OMX Helsinki stock exchange is a suitable market for examining the momentum effect, since international investors tend to unwind their positions in the most peripheral securities markets first in times of market turbulence. Empirical data is collected from Thomson Reuters Datastream and the OMX Nordic website. The objective of the study is to provide a thorough analysis by formulating a self-financing GH momentum portfolio. First, the seasonality of the strategy is examined by taking the January effect into account and investigating abnormal returns in the long term. The results indicate that the GH strategy is subject to significantly negative returns in January, but the strategy is not prone to reversals in the long term. Then the predictive proxies of momentum returns are investigated in terms of acquisition prices and 52-week high statistics as anchors. The results show that acquisition prices do not have explanatory power over the GH strategy’s abnormal returns. Finally, the efficacy of the GH strategy is examined after taking transaction costs into account, finding that the robust abnormal returns remain statistically significant despite the transaction costs. In conclusion, the relative distance between a stock’s current price and its 52-week high explains the profits of momentum investing to a high degree. The results indicate that intermediate-term momentum and long-term reversals are separate phenomena.
This presents a challenge to current behavioural theories, which model these aspects of stock returns as subsequent components of how securities markets respond to relevant information.
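The GH ranking variable described above is simply the current price divided by the 52-week high. A minimal sketch (prices, tickers and the function name are invented for illustration):

```python
def gh_rank(prices_52w):
    """Rank tickers by nearness of the last close to the 52-week high.
    prices_52w: dict ticker -> list of closes over the past 52 weeks."""
    ratio = {t: p[-1] / max(p) for t, p in prices_52w.items()}
    return sorted(ratio, key=ratio.get, reverse=True), ratio

prices = {
    "AAA": [10.0, 12.0, 11.0, 11.8],   # close to its 52-week high
    "BBB": [20.0, 30.0, 22.0, 21.0],   # far below its high
    "CCC": [5.0, 5.0, 5.0, 5.0],       # exactly at its high
}
order, ratios = gh_rank(prices)
# A self-financing GH portfolio goes long the top of `order`
# and short the bottom.
```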
Abstract:
This study concentrates on Product Data Management (PDM) systems and on sheet metal design features and classification. In this thesis, PDM is seen as an individual system which handles all product-related data and information. The purpose of relevant data is to take the manufacturing process further with fewer errors. Sheet metal features give more information and value to the designed models. The possibility of implementing PDM together with sheet metal feature recognition is the core of this study. Their integration should make the design process faster and make manufacturing-friendly products easier to design. The triangulation method is the basis for this research. The sections of this triangle are: a scientific literature review, interviews using the Delphi method, and the author’s experience and observations. The main findings of this study are: (1) the area of focus in the triangle (the triangle of three different points of view: business, information exchange and technical) depends on the person’s background and their role in the company; (2) the classification in the PDM system (and also in the CAD system) should be based on the materials, tools and machines that are in use in the company; (3) the design process has to become more effective because of the increase in industrial production, sheet metal blank production and the designer’s time spent on actual design; and (4) because Design For Manufacture (DFM) integration can be done with CAD programs, DFM integration with the PDM system should also be possible.
Abstract:
The goal of the thesis is to analyze the strengths and weaknesses of the solar PV business model and to point out key factors that affect the efficiency of the business model; the results are expected to help in creating a new business strategy. The case study research methodology is chosen as the theoretical background to structure the design of the thesis, indicating how to choose the right research method and how to conduct a case study. The business model canvas is adopted as the tool for analyzing the case studies of SolarCity and Sungevity. The results are presented through a comparison between the case studies. Solar services and products, customer acquisition cost, intellectual resources and powerful sales channels are identified as the major factors for the TPO (third-party ownership) model.
Abstract:
This thesis is concerned with state and parameter estimation in state space models. The estimation of states and parameters is an important task when mathematical modeling is applied to many different application areas such as global positioning systems, target tracking, navigation, brain imaging, the spread of infectious diseases, biological processes, telecommunications, audio signal processing, stochastic optimal control, machine learning, and physical systems. In Bayesian settings, the estimation of states or parameters amounts to the computation of the posterior probability density function. Except for a very restricted number of models, it is impossible to compute this density function in closed form. Hence, we need approximation methods. A state estimation problem involves estimating the states (latent variables) that are not directly observed in the output of the system. In this thesis, we use the Kalman filter, extended Kalman filter, Gauss–Hermite filters, and particle filters to estimate the states based on available measurements. Among these filters, particle filters are numerical methods for approximating the filtering distributions of non-linear non-Gaussian state space models via Monte Carlo. The performance of a particle filter depends heavily on the chosen importance distribution. For instance, an inappropriate choice of the importance distribution can lead to the failure of convergence of the particle filter algorithm. In this thesis, we analyze the theoretical Lᵖ particle filter convergence with general importance distributions, where p ≥ 2 is an integer. A parameter estimation problem is concerned with inferring the model parameters from measurements. For high-dimensional complex models, estimation of parameters can be done by Markov chain Monte Carlo (MCMC) methods. In its operation, the MCMC method requires the unnormalized posterior distribution of the parameters and a proposal distribution.
In this thesis, we show how the posterior density function of the parameters of a state space model can be computed by filtering-based methods, where the states are integrated out. This type of computation is then applied to estimate the parameters of stochastic differential equations. Furthermore, we compute the partial derivatives of the log-posterior density function and use the hybrid Monte Carlo and scaled conjugate gradient methods to infer the parameters of stochastic differential equations. The computational efficiency of MCMC methods depends heavily on the chosen proposal distribution. A commonly used proposal distribution is the Gaussian. With this kind of proposal, the covariance matrix must be well tuned. To tune it, adaptive MCMC methods can be used. In this thesis, we propose a new way of updating the covariance matrix using the variational Bayesian adaptive Kalman filter algorithm.
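The bootstrap particle filter, which uses the transition density as its importance distribution, can be sketched as follows. The scalar benchmark model used here is a standard textbook example, not necessarily a model from the thesis:

```python
import numpy as np

rng = np.random.default_rng(0)

def bootstrap_pf(ys, n_particles=500):
    """Bootstrap particle filter for the scalar benchmark model
    x_k = 0.5 x + 25 x / (1 + x^2) + N(0,1),  y_k = x_k^2 / 20 + N(0,1)."""
    x = rng.normal(0.0, 1.0, n_particles)        # initial particle cloud
    means = []
    for y in ys:
        # Propagate particles with the transition density (the
        # importance distribution of the bootstrap filter).
        x = 0.5 * x + 25.0 * x / (1.0 + x**2) + rng.normal(0.0, 1.0, n_particles)
        # Weight by the observation likelihood and normalize.
        w = np.exp(-0.5 * (y - x**2 / 20.0) ** 2)
        w /= w.sum()
        means.append(np.sum(w * x))              # filtering mean estimate
        # Multinomial resampling to counter weight degeneracy.
        x = x[rng.choice(n_particles, size=n_particles, p=w)]
    return np.array(means)

filter_means = bootstrap_pf([1.0, 2.0, 0.5], n_particles=300)
```

An inappropriate importance distribution shows up here as weight degeneracy: almost all normalized weights collapse to zero, which the resampling step only partly mitigates.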
Abstract:
Coronary artery disease is an atherosclerotic disease, which leads to narrowing of the coronary arteries, deteriorated myocardial blood flow and myocardial ischaemia. In acute myocardial infarction, a prolonged period of myocardial ischaemia leads to myocardial necrosis. Necrotic myocardium is replaced with scar tissue. Myocardial infarction results in various changes in cardiac structure and function over time, a process known as “adverse remodelling”. This remodelling may result in a progressive worsening of cardiac function and the development of chronic heart failure. In this thesis, we developed and validated three different large animal models of coronary artery disease, myocardial ischaemia and infarction for translational studies. In the first study, the coronary artery disease model had both induced diabetes and hypercholesterolemia. In the second study, myocardial ischaemia and infarction were caused by a surgical method, and in the third study by catheterisation. For model characterisation, we used non-invasive positron emission tomography (PET) methods for the measurement of myocardial perfusion, oxidative metabolism and glucose utilisation. Additionally, cardiac function was measured by echocardiography and computed tomography. To study the metabolic changes that occur during atherosclerosis, a hypercholesterolemic and diabetic model was used with [18F]fluorodeoxyglucose ([18F]FDG) PET imaging. Coronary occlusion models were used to evaluate metabolic and structural changes in the heart and the cardioprotective effects of levosimendan during post-infarction cardiac remodelling. Large animal models were used in the testing of novel radiopharmaceuticals for myocardial perfusion imaging. In the coronary artery disease model, we observed atherosclerotic lesions that were associated with focally increased [18F]FDG uptake.
In the heart failure models, chronic myocardial infarction led to the worsening of systolic function, cardiac remodelling and decreased efficiency of the cardiac pumping function. Levosimendan therapy reduced post-infarction myocardial infarct size and improved cardiac function. The novel 68Ga-labeled radiopharmaceuticals tested in this study were not successful for the determination of myocardial blood flow. In conclusion, diabetes and hypercholesterolemia lead to the development of early-phase atherosclerotic lesions. Coronary artery occlusion produced considerable myocardial ischaemia and, later, infarction followed by myocardial remodelling. The experimental models evaluated in these studies will enable further studies concerning disease mechanisms, new radiopharmaceuticals and interventions in coronary artery disease and heart failure.
Abstract:
Nowadays, energy efficiency has become one of the topics of greatest concern. Compressors are very common equipment in industry. Moreover, they tend to operate over long cycles, and therefore even a small decrease in power consumption can significantly reduce annual electricity costs. It is therefore important to investigate ways of increasing the energy efficiency of compressors. In this thesis, the rotary screw compressor is described alongside different control approaches. Simulation models for various control types of the rotary screw compressor are developed. An analysis of laboratory equipment is conducted, and the results are compared with the simulation. Suggestions for improving the real laboratory equipment are given.
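As an illustration of why the control approach matters for compressor energy use, the sketch below compares a fixed-speed machine with load/unload control against a variable-speed drive whose power is taken as roughly proportional to delivered flow. All powers and the demand profile are invented assumptions, not the thesis's measurements or models:

```python
P_RATED = 30.0                 # kW, shaft power at full load (assumed)
P_UNLOAD = 0.3 * P_RATED       # kW, idling power while unloaded (assumed)

def energy_load_unload(demand_profile, dt_h=1.0):
    """Fixed-speed machine: full power while loaded, idle power otherwise."""
    return sum((d * P_RATED + (1 - d) * P_UNLOAD) * dt_h for d in demand_profile)

def energy_vsd(demand_profile, dt_h=1.0):
    """Variable-speed drive: power roughly proportional to delivered flow
    (positive-displacement machine at constant discharge pressure)."""
    return sum(d * P_RATED * dt_h for d in demand_profile)

profile = [0.4, 0.6, 0.3, 0.8]   # hourly average demand as fraction of full flow
saving = energy_load_unload(profile) - energy_vsd(profile)   # kWh over 4 h
```

Under this toy model the saving grows with the fraction of time the fixed-speed machine idles, which is why part-load operation is where control strategy matters most.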
Abstract:
Transportation plays a major role in the gross domestic product of many nations. There are, however, many obstacles hindering the transportation sector. Achieving cost-efficiency along with proper delivery times, high frequency and reliability is not a straightforward task. Furthermore, environmental friendliness has increased in importance across the whole transportation sector. This development will change roles inside the transportation sector. Even now, but especially in the future, decisions regarding the transportation sector will be based partly on emission levels and other externalities originating from transportation, in addition to pure transportation costs. There are different factors which could have an impact on the transportation sector. The IMO’s sulphur regulation is estimated to increase the costs of short sea shipping in the Baltic Sea. The price development of energy could change the roles of different transport modes. Higher awareness of the environmental impacts originating from transportation could also have an impact on the price level of more polluting transport modes. According to earlier research, increased inland transportation, modal shift and slow steaming can be possible results of these changes in the transportation sector. Possible changes in the transportation sector and ways to settle potential obstacles are studied in this dissertation. Furthermore, means to improve cost-efficiency and to decrease the environmental impacts originating from transportation are researched. A hypothetical Finnish dry port network and the Rail Baltica transport corridor are studied in this dissertation. Benefits and disadvantages are studied with different methodologies. These include gravitational models, which were optimized with linear integer programming, discrete-event and system dynamics simulation, an interview study and a case study. The geographical focus is on the Baltic Sea Region, but the results can be adapted to other geographical locations with discretion.
The results indicate that the dry port concept has benefits, but optimization of the location and the number of dry ports plays an important role. In addition, the utilization of dry ports for freight transportation should be carefully managed, since only a certain share of the total freight volume can be cost-efficiently transported through dry ports. If dry ports are created and located without proper planning, they could actually increase the transportation costs and delivery times of the whole transportation system. With an optimized dry port network, transportation costs in Finland can be lowered with three to five dry ports; environmental impacts can be lowered with up to nine dry ports. If more dry ports are added to the system, the benefits become very minor, i.e. the payback time of the investments becomes extremely long. Furthermore, a dry port network could support major transport corridors such as Rail Baltica. Based on an analysis of statistics and the interview study, there could be enough freight volume available for Rail Baltica, especially if North-West Russia is part of the northern end of the corridor. Transit traffic to and from Russia (especially through the Baltic States) plays a large role. It could be possible to increase transit traffic through Finland by connecting the potential Finnish dry port network and the studied transport corridor. Additionally, the sulphur emission regulation is assumed to increase the attractiveness of Rail Baltica in the year 2015. Part of the transit traffic could be rerouted along Rail Baltica instead of across the Baltic Sea, since the price level of sea transport could increase due to the sulphur regulation. Both the hypothetical Finnish dry port network and the Rail Baltica transport corridor could benefit each other. The dry port network could gain more market share from Russia, but also from Central Europe, which is at the other end of Rail Baltica.
In addition, further Eastern countries could also be connected to achieve higher potential freight volume by rail.
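The dry port location problem described above can be illustrated with a toy cost model: each region ships either directly by road to the seaport or via a dry port (road feeder plus rail leg), and we search for the subset of candidate dry ports that minimizes total cost, fixed opening costs included. The brute-force enumeration below stands in for the linear integer programming used in the dissertation; every number is an illustrative assumption:

```python
from itertools import combinations

regions = {"A": 100.0, "B": 60.0, "C": 30.0}             # freight, tonnes
dist_to_seaport = {"A": 200.0, "B": 150.0, "C": 300.0}   # km by road
dist_to_dryport = {"D1": {"A": 50.0, "B": 120.0, "C": 250.0},
                   "D2": {"A": 180.0, "B": 40.0, "C": 90.0}}
dryport_to_seaport = {"D1": 180.0, "D2": 130.0}          # km by rail
ROAD, RAIL, OPEN_COST = 1.0, 0.4, 5000.0   # cost per tonne-km / fixed cost per port

def network_cost(open_ports):
    total = OPEN_COST * len(open_ports)
    for r, mass in regions.items():
        options = [ROAD * dist_to_seaport[r]]            # direct road haul
        for d in open_ports:                             # road feeder + rail leg
            options.append(ROAD * dist_to_dryport[d][r]
                           + RAIL * dryport_to_seaport[d])
        total += mass * min(options)                     # each region picks cheapest route
    return total

candidates = ["D1", "D2"]
best = min((frozenset(s)
            for k in range(len(candidates) + 1)
            for s in combinations(candidates, k)),
           key=network_cost)
```

With realistic data the subset search is replaced by an integer program, but the trade-off is the same: each additional dry port lowers haulage costs less while adding a fixed cost, which is why the benefits flatten beyond a handful of terminals.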
Abstract:
The two central goals of this master's thesis are to serve as a guidebook on the determination of uncertainty in efficiency measurements and to investigate sources of uncertainty in efficiency measurements in the field of electric drives by a literature review, mathematical modeling and experimental means. The influence of individual sources of uncertainty on the total instrumental uncertainty is investigated with the help of mathematical models derived for a balance calorimeter and a direct air-cooled calorimeter. The losses of a frequency converter and an induction motor are measured with the input-output method and a balance calorimeter at 50 % and 100 % loads. Software linking features of Matlab and Excel is created to process measurement data, calculate uncertainties, and calculate and visualize results. The uncertainties are combined with both the worst-case and the realistic perturbation method (RPM), and distributions of uncertainty by source are shown based on the experimental results. A comparison of the calculated uncertainties suggests that the balance calorimeter determines losses more accurately than the input-output method, with a relative RPM uncertainty of 1.46 % compared to 3.78–12.74 %, respectively, at a 95 % level of confidence and an induction motor efficiency of 93 % or higher. As some principles in uncertainty analysis are open to interpretation, the views and decisions of the analyst can have a noticeable influence on the uncertainty in the measurement result.
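The two combination rules contrasted above differ only in how component uncertainties are summed. A minimal sketch (the component values are illustrative, not the thesis's data):

```python
import math

def worst_case(components):
    """Worst-case combination: magnitudes simply add (an upper bound)."""
    return sum(abs(c) for c in components)

def rss(components):
    """Realistic (root-sum-of-squares) combination of independent components."""
    return math.sqrt(sum(c * c for c in components))

u = [0.5, 1.2, 0.3]                    # e.g. relative uncertainties in percent
u_wc, u_rss = worst_case(u), rss(u)    # u_rss is always <= u_wc
```

The worst-case sum assumes all errors peak simultaneously, while the quadrature sum treats them as independent; the gap between the two is largest when many components are of similar size.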
Abstract:
The purpose of this thesis is to examine various policy implementation models, and to determine what use they are to a government. In order to ensure that governmental proposals are created and exercised in an effective manner, there must be some guidelines in place which will assist in resolving difficult situations. All governments face the challenge of responding to public demand by delivering the type of policy responses that will attempt to answer those demands. The problem for those people in positions of policy-making responsibility is to balance the competitive forces that would influence policy. This thesis examines provincial government policy in two unique cases. The first is the revolutionary recommendations brought forth in the Hall-Dennis Report. The second is the question of extending full funding to the end of high school in the separate school system. These two cases illustrate how divergent and problematic the policy-making duties of any government may be. In order to respond to these political challenges, decision-makers must have a clear understanding of what they are attempting to do. They must also have an assortment of policy-making models that will ensure a policy response effectively deals with the issue under examination. A government must make every effort to ensure that all policy-making methods are considered, and that the data gathered is inserted into the most appropriate model. Currently, there is considerable debate over the benefits of the progressive individualistic education approach as proposed by the Hall-Dennis Committee. This debate is usually intensified during periods of economic uncertainty. Periodically, the province will also experience brief yet equally intense debate on the question of separate school funding. At one level, this debate centres around the efficiency of maintaining two parallel education systems, but the debate frequently has undertones of the religious animosity common in Ontario's history.
As a result of the two policy cases under study, we may ask ourselves these questions: a) did the policies in question improve the general quality of life in the province? and b) did the policies unite the province? In the cases of educational instruction and finance, the debate is ongoing and unsettling. Currently, there is a widespread belief that provincial students at the elementary and secondary levels of education are not being educated adequately to meet the challenges of the twenty-first century. The perceived culprit is individualized education, which sees students progressing through the system at their own pace and not meeting adequate education standards. The question of the finance of Catholic education occasionally rears its head in a painful fashion within the province. Some public school supporters tend to take extension as a personal religious defeat, rather than an opportunity to demonstrate that educational diversity can be accommodated within Canada's most populated province. This thesis is an attempt to analyze how successful provincial policy-implementation models were in answering public demand. A majority of the public did not demand additional separate school funding, yet it was put into place. The same majority did insist on an examination of educational methods, and the government did put changes in place. The thesis will also demonstrate how policy, if wisely created, may spread additional benefits to the public at large. Catholic students currently enjoy a much improved financial contribution from the province, yet these additional funds were taken from somewhere. The public system had its funds reduced with what would appear to be minimal impact. This indicates that government policy is still sensitive to the strongly held convictions of those people in opposition to a given policy.
Abstract:
In this paper, we test a version of the conditional CAPM with respect to a local market portfolio, proxied by the Brazilian stock index during the 1976-1992 period. We also test a conditional APT model by using the difference between the 30-day rate (Cdb) and the overnight rate as a second factor in addition to the market portfolio in order to capture the large inflation risk present during this period. The conditional CAPM and APT models are estimated by the Generalized Method of Moments (GMM) and tested on a set of size portfolios created from a total of 25 securities exchanged on the Brazilian markets. The inclusion of this second factor proves to be crucial for the appropriate pricing of the portfolios.
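Ignoring the conditioning information, the two-factor specification above reduces to a linear regression of portfolio excess returns on the market factor and the rate spread; with exactly identified moment conditions, GMM coincides with OLS. A sketch on synthetic data (all coefficients and distributions are invented for illustration, not estimates from the Brazilian data):

```python
import numpy as np

rng = np.random.default_rng(1)

T = 240
mkt = rng.normal(0.01, 0.05, T)           # market (excess) return factor
spread = rng.normal(0.0, 0.02, T)         # 30-day minus overnight rate factor
ret = 0.002 + 1.1 * mkt + 0.6 * spread + rng.normal(0.0, 0.01, T)

# OLS on [1, mkt, spread]; with exactly identified moments,
# E[(ret - X b) * X] = 0, this is the GMM estimator.
X = np.column_stack([np.ones(T), mkt, spread])
alpha, beta_mkt, beta_spread = np.linalg.lstsq(X, ret, rcond=None)[0]
```

In the conditional versions tested in the paper, the moment conditions are interacted with instruments in the information set, and GMM with a weighting matrix replaces plain OLS.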
Abstract:
In this paper, we propose exact likelihood-based mean-variance efficiency tests of the market portfolio in the context of the Capital Asset Pricing Model (CAPM), allowing for a wide class of error distributions which include normality as a special case. These tests are developed in the framework of multivariate linear regressions (MLR). It is well known, however, that despite their simple statistical structure, standard asymptotically justified MLR-based tests are unreliable. In financial econometrics, exact tests have been proposed for a few specific hypotheses [Jobson and Korkie (Journal of Financial Economics, 1982), MacKinlay (Journal of Financial Economics, 1987), Gibbons, Ross and Shanken (Econometrica, 1989), Zhou (Journal of Finance, 1993)], most of which depend on normality. For the Gaussian model, our tests correspond to Gibbons, Ross and Shanken’s mean-variance efficiency tests. In non-Gaussian contexts, we reconsider mean-variance efficiency tests allowing for multivariate Student-t and Gaussian mixture errors. Our framework allows us to cast more evidence on whether the normality assumption is too restrictive when testing the CAPM. We also propose exact multivariate diagnostic checks (including tests for multivariate GARCH and a multivariate generalization of the well-known variance ratio tests) and goodness-of-fit tests, as well as a set estimate for the intervening nuisance parameters. Our results [over five-year subperiods] show the following: (i) multivariate normality is rejected in most subperiods, (ii) residual checks reveal no significant departures from the multivariate i.i.d. assumption, and (iii) mean-variance efficiency of the market portfolio is not rejected as frequently once we allow for the possibility of non-normal errors.
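In the Gaussian case the benchmark is the Gibbons-Ross-Shanken statistic, which tests whether the intercepts (alphas) of all portfolios are jointly zero. One common form of the statistic is sketched below on synthetic data; degrees-of-freedom conventions vary slightly across presentations, and the data here are invented for illustration:

```python
import numpy as np

rng = np.random.default_rng(2)

T, N = 120, 5
mkt = rng.normal(0.01, 0.05, T)                    # market excess return
betas = rng.uniform(0.8, 1.2, N)
R = np.outer(mkt, betas) + rng.normal(0.0, 0.02, (T, N))   # true alphas are zero

X = np.column_stack([np.ones(T), mkt])
B = np.linalg.lstsq(X, R, rcond=None)[0]           # row 0: alphas, row 1: betas
alphas, resid = B[0], R - X @ B
Sigma = resid.T @ resid / (T - 2)                  # residual covariance estimate
sharpe2 = (mkt.mean() / mkt.std(ddof=1)) ** 2      # squared sample Sharpe ratio
grs = ((T - N - 1) / N) * (alphas @ np.linalg.solve(Sigma, alphas)) / (1 + sharpe2)
# Under Gaussian errors and zero true alphas, grs follows an F(N, T - N - 1)
# distribution; the exact tests in the paper extend this to non-Gaussian errors.
```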
Abstract:
In this paper, we propose several finite-sample specification tests for multivariate linear regressions (MLR) with applications to asset pricing models. We focus on departures from the assumption of i.i.d. errors, at univariate and multivariate levels, with Gaussian and non-Gaussian (including Student t) errors. The univariate tests studied extend existing exact procedures by allowing for unspecified parameters in the error distributions (e.g., the degrees of freedom in the case of the Student t distribution). The multivariate tests are based on properly standardized multivariate residuals to ensure invariance to MLR coefficients and error covariances. We consider tests for serial correlation, tests for multivariate GARCH and sign-type tests against general dependencies and asymmetries. The procedures proposed provide exact versions of those applied in Shanken (1990), which consist of combining univariate specification tests. Specifically, we combine tests across equations using the Monte Carlo (MC) test procedure to avoid Bonferroni-type bounds. Since non-Gaussian based tests are not pivotal, we apply the “maximized MC” (MMC) test method [Dufour (2002)], where the MC p-value for the tested hypothesis (which depends on nuisance parameters) is maximized (with respect to these nuisance parameters) to control the test’s significance level. The tests proposed are applied to an asset pricing model with observable risk-free rates, using monthly returns on New York Stock Exchange (NYSE) portfolios over five-year subperiods from 1926-1995. Our empirical results reveal the following. Whereas univariate exact tests indicate significant serial correlation, asymmetries and GARCH in some equations, such effects are much less prevalent once error cross-equation covariances are accounted for. In addition, significant departures from the i.i.d. hypothesis are less evident once we allow for non-Gaussian errors.
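The Monte Carlo test procedure referred to above replaces an asymptotic critical value with simulation under the null: for a pivotal statistic, the p-value (1 + #{simulated >= observed}) / (S + 1) yields an exact level-alpha test. A minimal sketch with an invented illustrative statistic:

```python
import numpy as np

rng = np.random.default_rng(3)

def mc_pvalue(obs_stat, simulate, S=999):
    """Exact Monte Carlo p-value for a pivotal test statistic."""
    sims = np.array([simulate() for _ in range(S)])
    # The observed draw counts as one of S + 1 exchangeable statistics.
    return (1 + np.sum(sims >= obs_stat)) / (S + 1)

n = 50
simulate = lambda: np.mean(rng.normal(0.0, 1.0, n)) ** 2   # draws under the null
p = mc_pvalue(simulate(), simulate, S=499)
```

The maximized MC variant evaluates this p-value over a grid of nuisance-parameter values and reports the maximum, which controls the level when the statistic is not pivotal.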
Abstract:
We study the problem of testing the error distribution in a multivariate linear regression (MLR) model. The tests are functions of appropriately standardized multivariate least squares residuals whose distribution is invariant to the unknown cross-equation error covariance matrix. Empirical multivariate skewness and kurtosis criteria are then compared to simulation-based estimates of their expected values under the hypothesized distribution. Special cases considered include testing multivariate normal, Student t, normal mixture and stable error models. In the Gaussian case, finite-sample versions of the standard multivariate skewness and kurtosis tests are derived. To do this, we exploit simple, double and multi-stage Monte Carlo test methods. For non-Gaussian distribution families involving nuisance parameters, confidence sets are derived for the nuisance parameters and the error distribution. The procedures considered are evaluated in a small simulation experiment. Finally, the tests are applied to an asset pricing model with observable risk-free rates, using monthly returns on New York Stock Exchange (NYSE) portfolios over five-year subperiods from 1926-1995.
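The multivariate kurtosis criterion can be illustrated with Mardia's statistic, the average of the fourth powers of the Mahalanobis distances, whose expectation under d-variate normality is d(d + 2). A sketch on simulated Gaussian data (the data and sample size are invented for illustration):

```python
import numpy as np

rng = np.random.default_rng(4)

def mardia_kurtosis(X):
    """Mardia's multivariate kurtosis: mean of the fourth powers
    of the Mahalanobis distances of the rows of X."""
    Xc = X - X.mean(axis=0)
    S_inv = np.linalg.inv(np.cov(X, rowvar=False, bias=True))
    d2 = np.einsum("ij,jk,ik->i", Xc, S_inv, Xc)   # squared Mahalanobis distances
    return float(np.mean(d2 ** 2))

X = rng.normal(size=(5000, 3))
b2 = mardia_kurtosis(X)    # expectation under 3-variate normality: 3 * 5 = 15
```

The finite-sample tests described above compare such empirical criteria against simulation-based estimates of this expectation rather than the asymptotic value.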