833 results for Robustness
Abstract:
Virtual environments and real-time simulators (VERS) are becoming increasingly important tools in the research and development (R&D) process of non-road mobile machinery (NRMM). Virtual prototyping techniques enable faster and more cost-efficient development of machines than the use of real-life prototypes. High energy efficiency has become an important topic in the world of NRMM because of environmental and economic demands. The objective of this thesis is to develop VERS-based methods for the research and development of NRMM. A process using VERS for assessing the effects of human operators on the life-cycle efficiency of NRMM was developed. Human-in-the-loop simulations were run with an underground mining loader to study the developed process. The simulations were run in the virtual environment of the Laboratory of Intelligent Machines of Lappeenranta University of Technology. A physically adequate real-time simulation model of NRMM was shown to be reliable and cost-effective for testing hardware components by means of hardware-in-the-loop (HIL) simulation. A control interface connecting an integrated electro-hydraulic energy converter (IEHEC) to a virtual simulation model of a log crane was developed. The IEHEC consists of a hydraulic pump-motor and an integrated electrical permanent-magnet synchronous motor-generator. The results show that state-of-the-art real-time NRMM simulators are capable of resolving factors related to the energy consumption and productivity of NRMM. Significant variation between the test drivers was found. The results show that VERS can be used for assessing human effects on the life-cycle efficiency of NRMM. Comparing the HIL simulation responses with those achieved with the conventional simulation method demonstrates the advantages and drawbacks of various possible interfaces between the simulator and the hardware part of the system under study. Novel ideas for arranging the interface were successfully tested and compared with the more traditional one. The proposed process for assessing the effects of operators on life-cycle efficiency will be applied to a wider group of operators in the future. The driving styles of the operators can then be analysed statistically from a sufficiently large result data set, and such analysis can identify the most life-cycle-efficient driving style for a specific environment and machinery. The proposed control interface for HIL simulation needs to be studied further; the robustness and adaptability of the interface in different situations must be verified. Future work will also include studying the suitability of the IEHEC for different working machines using the proposed HIL simulation method.
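For illustration only, the sketch below shows a generic fixed-step hardware-in-the-loop exchange loop in Python; the 1 ms step and the `model`/`hw` interfaces are assumed placeholders, not the interface actually implemented in the thesis.

```python
# Hypothetical sketch of a fixed-step hardware-in-the-loop (HIL) exchange loop.
# The interface names (read_outputs, write_inputs, step) are illustrative placeholders.
import time

STEP = 0.001  # 1 ms real-time step (assumed)

def run_hil(model, hw, duration_s=10.0):
    """Advance the virtual machine model in lock-step with the hardware under test."""
    n_steps = int(duration_s / STEP)
    next_deadline = time.perf_counter()
    for _ in range(n_steps):
        u = hw.read_outputs()          # e.g. measured pressure/torque from the converter
        y = model.step(u, STEP)        # integrate the real-time crane model one step
        hw.write_inputs(y)             # e.g. commanded flow/speed back to the hardware
        next_deadline += STEP
        sleep = next_deadline - time.perf_counter()
        if sleep > 0:                  # keep the loop synchronized to wall-clock time
            time.sleep(sleep)
```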
Abstract:
Traditionally, real estate has been seen as a good diversification tool for a stock portfolio due to the lower return and volatility characteristics of real estate investments. However, the diversification benefits of a multi-asset portfolio depend on how the different asset classes co-move in the short and long run. As the asset classes are affected by the same macroeconomic factors, interrelationships limiting the diversification benefits could exist. This master’s thesis aims to identify such dynamic linkages in the Finnish real estate and stock markets. The results are beneficial for portfolio optimization tasks as well as for policy-making. The real estate industry can be divided into direct and securitized markets. In this thesis the direct market is represented by the Finnish housing market index. The securitized market is proxied by the Finnish all-sectors securitized real estate index and by a European residential Real Estate Investment Trust index. The stock market is represented by the OMX Helsinki Cap index. Several macroeconomic variables are incorporated as well. The methodology of this thesis is based on Vector Autoregressive (VAR) models. The long-run dynamic linkages are studied with Johansen’s cointegration tests, and the short-run interrelationships are examined with Granger-causality tests. In addition, impulse response functions and forecast error variance decomposition analyses are used as robustness checks. The results show that long-run co-movement, or cointegration, did not exist between the housing and stock markets during the sample period. This indicates diversification benefits in the long run. However, cointegration between the stock and securitized real estate markets was identified. This indicates limited diversification benefits and shows that the listed real estate market in Finland is not mature enough to be considered a market separate from the general stock market. Moreover, while securitized real estate was shown to cointegrate with the housing market in the long run, the two markets are still too different in their characteristics to be used as substitutes in a multi-asset portfolio. This implies that the capital intensiveness of housing investments cannot be circumvented by investing in securitized real estate.
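As a rough illustration of the testing sequence described above, the following Python sketch runs a Johansen cointegration test on index levels and Granger-causality tests on a VAR in first differences using statsmodels; the file and column names are hypothetical stand-ins for the Finnish indices.

```python
# Minimal sketch of the long- and short-run tests described in the abstract.
import pandas as pd
from statsmodels.tsa.api import VAR
from statsmodels.tsa.stattools import grangercausalitytests
from statsmodels.tsa.vector_ar.vecm import coint_johansen

df = pd.read_csv("finnish_markets.csv", index_col=0, parse_dates=True)  # hypothetical file
levels = df[["housing_index", "omxh_cap", "securitized_re"]].dropna()    # hypothetical columns

# Long run: Johansen cointegration test on (log) levels.
jres = coint_johansen(levels, det_order=0, k_ar_diff=2)
print(jres.lr1)   # trace statistics
print(jres.cvt)   # 90/95/99% critical values

# Short run: VAR on first differences, then a pairwise Granger-causality test
# of whether housing returns Granger-cause stock returns.
diffs = levels.diff().dropna()
var_res = VAR(diffs).fit(maxlags=4, ic="aic")
grangercausalitytests(diffs[["omxh_cap", "housing_index"]], maxlag=4)
```

Impulse responses and forecast error variance decompositions can then be obtained from the fitted VAR object, e.g. `var_res.irf(10)` and `var_res.fevd(10)`.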
Abstract:
Object detection is a fundamental task of computer vision that is utilized as a core part of a number of industrial and scientific applications, for example, in robotics, where objects need to be correctly detected and localized prior to being grasped and manipulated. Existing object detectors vary in (i) the amount of supervision they need for training, (ii) the type of learning method adopted (generative or discriminative) and (iii) the amount of spatial information used in the object model (model-free, using no spatial information in the object model, or model-based, with an explicit spatial model of the object). Although some existing methods report good performance in the detection of certain objects, the results tend to be application specific, and no universal method has been found that clearly outperforms all others in all areas. This work proposes a novel generative part-based object detector. The generative learning procedure of the developed method allows learning from positive examples only. The detector is based on finding semantically meaningful parts of the object (i.e. a part detector) that can provide information beyond object location, for example, pose. The object class model, i.e. the appearance of the object parts and their spatial variance (constellation), is explicitly modelled in a fully probabilistic manner. The appearance is based on bio-inspired complex-valued Gabor features that are transformed into part probabilities by an unsupervised Gaussian Mixture Model (GMM). The proposed novel randomized GMM enables learning from only a few training examples. The probabilistic spatial model of the part configurations is constructed with a mixture of 2D Gaussians. The appearance of the parts of the object is learned in an object canonical space that removes geometric variations from the part appearance model. Robustness to pose variations is achieved by object pose quantization, which is more efficient than the previously used scale and orientation shifts in the Gabor feature space. The performance of the resulting generative object detector is characterized by high recall with low precision, i.e. the generative detector produces a large number of false positive detections. Thus a discriminative classifier is used to prune false positive candidate detections produced by the generative detector, improving its precision while keeping recall high. Using only a small number of positive examples, the developed object detector performs comparably to state-of-the-art discriminative methods.
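The following sketch illustrates the general idea of converting local appearance descriptors into part probabilities with a Gaussian Mixture Model (standard scikit-learn GMM); the thesis's complex-valued Gabor features and randomized GMM variant are not reproduced, and the random arrays are placeholders for descriptors sampled at annotated part locations.

```python
# Illustrative only: soft GMM assignments as "part probabilities".
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(0)
features = rng.normal(size=(200, 16))        # placeholder for Gabor descriptors

gmm = GaussianMixture(n_components=5, covariance_type="full", random_state=0)
gmm.fit(features)                            # unsupervised appearance model

query = rng.normal(size=(1, 16))             # descriptor at a candidate image location
part_probs = gmm.predict_proba(query)        # soft assignment = part probabilities
print(part_probs.round(3))
```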
Abstract:
Industrial production of pulp and paper is an intensive consumer of energy, natural resources, and chemicals, which results in a large carbon footprint for the final product. At present, companies and industries seek to quantify their greenhouse gas emissions in order to subsequently reduce atmospheric contamination. One of the approaches for reducing the carbon burden of pulp and paper manufacturing is paper recycling. The general purpose of the current paper is to establish methods for quantifying and minimizing the carbon footprint of paper. The first target of this research is to derive a mathematical relationship between virgin fibre requirements and the amount of recycled paper used in the pulp. A further aim is to establish a model for clarifying the contribution of recycling and transportation to reducing carbon dioxide emissions. In this study, sensitivity analysis is used to investigate the robustness of the obtained results. The results show that increasing the recycling rate does not always minimize the carbon footprint. Additionally, we find that transporting waste paper over distances longer than 5800 km is not worthwhile, because using that paper would only increase carbon dioxide emissions and it would be better to forgo recycling altogether. Finally, we designed a model for organizing a new supply chain delivering paper products to a customer. The models were implemented as reusable MATLAB frameworks.
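A toy sketch of the kind of fibre-balance and break-even calculation with one-at-a-time sensitivity analysis is given below; the functional forms and parameter values are illustrative assumptions, not the relationships derived in the paper.

```python
# Toy illustration only: an assumed steady-state fibre balance and an assumed
# break-even transport distance, with a simple one-at-a-time sensitivity sweep.
import numpy as np

def virgin_fibre_share(recycling_rate, process_yield=0.8):
    """Assumed balance: recovered fibre replaces recycling_rate * yield of the furnish."""
    return 1.0 - recycling_rate * process_yield

def break_even_distance_km(co2_saving_per_t=500.0, transport_co2_per_t_km=0.09):
    """Distance at which transport emissions cancel the assumed per-tonne recycling saving."""
    return co2_saving_per_t / transport_co2_per_t_km

for y in (0.7, 0.8, 0.9):                      # vary the process yield
    shares = [virgin_fibre_share(r, y) for r in np.linspace(0.0, 1.0, 5)]
    print(f"yield={y}: virgin fibre share {np.round(shares, 2)}")
print("break-even distance ≈", round(break_even_distance_km()), "km")
```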
Abstract:
Optimization of quantum measurement processes has a pivotal role in carrying out better, more accurate or less disruptive, measurements and experiments on a quantum system. In particular, convex optimization, i.e., identifying the extreme points of the convex sets and subsets of quantum measuring devices, plays an important part in quantum optimization, since the typical figures of merit for measurement processes are affine functionals. In this thesis, we discuss results determining the extreme quantum devices and their relevance, e.g., in quantum-compatibility-related questions. In particular, we see that a compatible device pair where one device is extreme can be joined into a single apparatus in an essentially unique way. Moreover, we show that the question of whether a pair of quantum observables can be measured jointly can often be formulated in a weaker form when some of the observables involved are extreme. Another major line of research treated in this thesis deals with convex analysis of special restricted quantum device sets, covariance structures or, in particular, generalized imprimitivity systems. Some results on the structure of covariant observables and instruments are listed, as well as results identifying the extreme points of covariance structures in quantum theory. As a special case study, not published anywhere before, we study the structure of Euclidean-covariant localization observables for spin-0 particles. We also discuss the general form of Weyl-covariant phase-space instruments. Finally, certain optimality measures originating from convex geometry are introduced for quantum devices, namely, boundariness, measuring how ‘close’ a quantum apparatus is to the algebraic boundary of the device set, and the robustness of incompatibility, quantifying the level of incompatibility for a quantum device pair by measuring the highest amount of noise the pair tolerates without becoming compatible. Boundariness is further associated with minimum-error discrimination of quantum devices, and the robustness of incompatibility is shown to behave monotonically under certain compatibility-non-decreasing operations. Moreover, the value of the robustness of incompatibility is given for a few special device pairs.
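One common way to formalize the robustness of incompatibility of a device pair (the exact normalization and class of admissible noise used in the thesis may differ) is as the least relative amount of noise that renders the noisy pair compatible:

```latex
\[
  R(D_1, D_2) \;=\; \inf\Bigl\{\, t \ge 0 \;\Bigm|\;
  \exists\, \text{devices } N_1, N_2:\;
  \tfrac{1}{1+t}\,D_1 + \tfrac{t}{1+t}\,N_1
  \ \text{and}\
  \tfrac{1}{1+t}\,D_2 + \tfrac{t}{1+t}\,N_2
  \ \text{are compatible} \Bigr\}.
\]
```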
Abstract:
Most applications of airborne laser scanner data to forestry require that the point cloud be normalized, i.e., each point represents height above the ground instead of elevation. To normalize the point cloud, a digital terrain model (DTM), which is derived from the ground returns in the point cloud, is employed. Unfortunately, extracting accurate DTMs from airborne laser scanner data is a challenging task, especially in tropical forests where the canopy is normally very thick (partially closed), leading to a situation in which only a limited number of laser pulses reach the ground. Therefore, robust algorithms for extracting accurate DTMs in low-ground-point-density situations are needed in order to realize the full potential of airborne laser scanner data for forestry. The objective of this thesis is to develop algorithms for processing airborne laser scanner data in order to: (1) extract DTMs in demanding forest conditions (complex terrain and a low number of ground points) for applications in forestry; (2) estimate canopy base height (CBH) for forest fire behavior modeling; and (3) assess the robustness of LiDAR-based high-resolution biomass estimation models against different field plot designs. Here, the aim is to find out whether field plot data gathered by professional foresters can be combined with field plot data gathered by professionally trained community foresters and used in LiDAR-based high-resolution biomass estimation modeling without affecting prediction performance. The question of interest in this case is whether or not local forest communities can achieve the level of technical proficiency required for accurate forest monitoring. The algorithms for extracting DTMs from LiDAR point clouds presented in this thesis address the challenges of extracting DTMs in low-ground-point situations and in complex terrain, while the algorithm for CBH estimation addresses the challenge of variations in the distribution of points in the LiDAR point cloud caused by, for example, variations in tree species and the season of data acquisition. These algorithms are adaptive (with respect to point cloud characteristics) and exhibit a high degree of tolerance to variations in the density and distribution of points in the LiDAR point cloud. Comparisons with existing DTM extraction algorithms showed that the DTM extraction algorithms proposed in this thesis performed better with respect to the accuracy of estimating tree heights from airborne laser scanner data. On the other hand, the proposed DTM extraction algorithms, being mostly based on trend surface interpolation, cannot retain small terrain features (e.g., bumps, small hills and depressions). Therefore, the DTMs generated by these algorithms are only suitable for forestry applications where the primary objective is to estimate tree heights from normalized airborne laser scanner data. The algorithm for estimating CBH proposed in this thesis is based on the idea of a moving voxel, in which gaps (openings in the canopy) that act as fuel breaks are located and their height is estimated. Test results showed a slight improvement in CBH estimation accuracy over existing CBH estimation methods, which are based on height percentiles in the airborne laser scanner data. Being based on the idea of a moving voxel, however, this algorithm has one main advantage over existing CBH estimation methods in the context of forest fire modeling: it has great potential for providing information about vertical fuel continuity. This information can be used to create vertical fuel continuity maps, which can provide more realistic information on the risk of crown fires than CBH alone.
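As a minimal illustration of the normalization step described above (height above ground rather than elevation), the sketch below interpolates a terrain surface from already-classified ground returns and subtracts it; the thesis's own DTM extraction and moving-voxel CBH algorithms are not reproduced.

```python
# Minimal sketch of point-cloud normalization against an interpolated ground surface.
import numpy as np
from scipy.interpolate import griddata

def normalize_point_cloud(points, ground_mask):
    """points: (N, 3) array of x, y, z; ground_mask: boolean array marking ground returns."""
    ground = points[ground_mask]
    # Interpolate ground elevation at every point location (linear, nearest as fallback).
    z_ground = griddata(ground[:, :2], ground[:, 2], points[:, :2], method="linear")
    nearest = griddata(ground[:, :2], ground[:, 2], points[:, :2], method="nearest")
    z_ground = np.where(np.isnan(z_ground), nearest, z_ground)
    normalized = points.copy()
    normalized[:, 2] = points[:, 2] - z_ground   # height above ground instead of elevation
    return normalized
```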
Abstract:
This work expands the classical Nelson and Winter model of Schumpeterian competition by including two sectors and a North-South dynamic, with a view to analyzing how different institutions and technological regimes affect the processes of convergence and divergence in the international economy. The results suggest that convergence may emerge from imitation efforts in the South when the technological regime is cumulative. But when the regime is science-based, imitation is not enough for successful catching-up; in this case, convergence requires the South to invest in innovation as well. The work also analyses the robustness of the model results using Monte Carlo techniques.
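A schematic sketch of the Monte Carlo robustness exercise, with the simulation model itself left as an abstract placeholder, might look as follows; the toy model and all parameter values are assumptions for illustration only.

```python
# Sketch of a Monte Carlo robustness wrapper: re-run a user-supplied simulation across
# many seeds and summarize the cross-run distribution of an outcome of interest,
# e.g. the final South/North productivity ratio.
import numpy as np

def monte_carlo(model_run, n_reps=500, seed=0, **params):
    """model_run(rng, **params) -> scalar outcome of one simulated history."""
    rng = np.random.default_rng(seed)
    outcomes = np.array([
        model_run(np.random.default_rng(rng.integers(1 << 31)), **params)
        for _ in range(n_reps)
    ])
    return outcomes.mean(), outcomes.std(), np.percentile(outcomes, [5, 95])

def toy_run(rng, periods=200):
    """Trivial placeholder model: a bounded random walk of the productivity gap."""
    gap = 0.5
    for _ in range(periods):
        gap = min(1.0, max(0.0, gap + rng.normal(0.0, 0.01)))
    return gap

print(monte_carlo(toy_run, n_reps=200))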
Abstract:
Brazil's post-war economic history has been marked by inflationary booms and busts, which kept large parts of the population poor, as income distribution remained highly skewed and most governments failed to put enough effort and resources into education and health. That seems to have changed recently, as an increasing number of studies have shown considerable advances in the incomes of the lower and middle classes. This essay examines those findings and puts them into historical perspective, discussing earlier attempts and hopes of Brazilian policy makers to advance the welfare of the population. It concludes that, while the last fifteen years have been remarkable in that the country achieved macroeconomic stability, and while increasing efforts to support the poor seem to have been moving income distribution slowly towards a more equal level, there is still a long way to go. The 2008 world financial crisis also hit Brazil hard, but the recovery has been smoother and faster than in any OECD country. The impact of the current crisis may provide a good test of the robustness of the previous trends towards furthering the well-being of the poor and the middle class.
Abstract:
In the last few decades, banking has become strongly internationalized and more complex. Hence, bank supervision and regulation have taken on a global perspective, too. The most important international regulations are the Basel frameworks issued by the Basel Committee on Banking Supervision. This study examines the effects of bank supervision and regulation, especially Basel II, on bank risk and risk-taking. In order to separate and recognize the efficiency of these effects, the joint effects of many supervisory and regulatory tools, together with other relevant factors, must be taken into account. The focus of the study is on the effects of asymmetric information and banking procyclicality on the efficiency of Basel II. This study asks whether Basel II, implemented in 2008, has decreased bank risk in banks of European Union member states. It examines empirically whether the volatility of bank stock returns changed after the implementation of Basel II. The panel data consist of the stock returns of 62 banks, bank-specific variables, economic variables and variables concerning the regulatory environment between 2003 and 2011. A fixed-effects regression is used for the panel data analysis. The results indicate that the volatility of bank stock returns increased after 2008 and the implementation of Basel II. The result is statistically highly significant, and its robustness has been verified across different model specifications. This result contradicts the goal of Basel II of stabilizing the banking system. The banking procyclicality and perverse incentives for regulatory arbitrage under asymmetric information explained in the theoretical part may explain this result. On the other hand, simultaneously with the implementation of Basel II, the global financial crisis emerged, causing severe losses in banks and increasing stock volatility. However, it is clear that supervision and regulation were unable to prevent the global financial crisis. After the financial crisis, supervision and regulation have been reformed globally. The main problems of Basel II, examined in the theoretical part, have been recognized in order to prevent procyclicality and perverse incentives in the future.
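A minimal sketch of such a fixed-effects (LSDV) panel regression with standard errors clustered by bank, using hypothetical file and variable names, could look like this in statsmodels:

```python
# Illustrative fixed-effects regression: bank and year dummies absorb unobserved
# heterogeneity; 'basel2' is an indicator for observations after 2008.
import pandas as pd
import statsmodels.formula.api as smf

panel = pd.read_csv("bank_panel_2003_2011.csv")   # hypothetical: columns volatility, basel2,
                                                  # size, leverage, gdp_growth, bank, year
model = smf.ols(
    "volatility ~ basel2 + size + leverage + gdp_growth + C(bank) + C(year)",
    data=panel,
).fit(cov_type="cluster", cov_kwds={"groups": panel["bank"]})
print(model.summary().tables[1])
```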
Abstract:
Small investors' sentiment has been proposed by behaviouralists to explain the existence and behaviour of the discount on closed-end funds (CEFD). The empirical tests of this sentiment hypothesis have so far provided equivocal results. Moreover, most out-of-sample tests outside the U.S. are not robust, in the sense that they fail to control well for other firm characteristics and risk factors that may explain stock returns, and fail to provide a formal cross-sectional test of the link between CEFD and stock returns. This thesis explores the role of CEFD in asset pricing and further validates CEFD as a sentiment proxy in the Canadian context; it augments the extant studies by examining the redemption feature inherent in Canadian closed-end funds and by enhancing the robustness of the empirical tests. Our empirical results document differential behaviour in discounts between redeemable funds and non-redeemable funds. However, we do not find supporting evidence that CEFD is a priced factor. Specifically, stocks with different exposures to CEFD fail to provide significantly different average returns. Nor does CEFD provide significant incremental explanatory power for cross-sectional or time-series variation in stock returns after controlling for other well-known firm characteristics and risk factors. This evidence, together with the findings from our direct test of CEFD as a sentiment index, suggests that CEFD, even the discount on traditional non-redeemable closed-end funds, is unlikely to be driven by elusive sentiment in Canada.
Abstract:
The aim of this thesis is to price options on equity index futures, with an application to standard options on S&P 500 futures traded on the Chicago Mercantile Exchange. Our methodology is based on stochastic dynamic programming, which can accommodate European as well as American options. The model accommodates dividends from the underlying asset and captures both the optimal exercise strategy and the fair value of the option. This approach is an alternative to available numerical pricing methods such as binomial trees, finite differences, and ad-hoc numerical approximation techniques. Our numerical and empirical investigations demonstrate convergence, robustness, and efficiency. We use this methodology to value exchange-listed options. The European option premiums thus obtained are compared to Black's closed-form formula; they are accurate to four digits. The American option premiums have a similar level of accuracy compared to premiums obtained using finite differences and binomial trees with a large number of time steps. The proposed model accounts for a deterministic, seasonally varying dividend yield. In pricing futures options, we discover that what matters is the sum of the dividend yields over the life of the futures contract and not their distribution.
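For reference, a generic backward-induction lattice for an American option on a futures contract, checked against Black's (1976) closed form for the European case, is sketched below; this is a textbook scheme, not the thesis's stochastic dynamic program, and it ignores the seasonal dividend-yield feature.

```python
# Binomial backward induction for an American futures option, plus Black (1976).
import math
from scipy.stats import norm

def american_futures_option(F, K, r, sigma, T, steps=500, call=True):
    dt = T / steps
    u = math.exp(sigma * math.sqrt(dt)); d = 1.0 / u
    p = (1.0 - d) / (u - d)               # futures price is a martingale => zero drift
    disc = math.exp(-r * dt)
    payoff = (lambda f: max(f - K, 0.0)) if call else (lambda f: max(K - f, 0.0))
    values = [payoff(F * u**j * d**(steps - j)) for j in range(steps + 1)]
    for i in range(steps - 1, -1, -1):
        for j in range(i + 1):
            cont = disc * (p * values[j + 1] + (1.0 - p) * values[j])
            values[j] = max(cont, payoff(F * u**j * d**(i - j)))   # early-exercise check
    return values[0]

def black76(F, K, r, sigma, T, call=True):
    d1 = (math.log(F / K) + 0.5 * sigma**2 * T) / (sigma * math.sqrt(T))
    d2 = d1 - sigma * math.sqrt(T)
    if call:
        return math.exp(-r * T) * (F * norm.cdf(d1) - K * norm.cdf(d2))
    return math.exp(-r * T) * (K * norm.cdf(-d2) - F * norm.cdf(-d1))

print(american_futures_option(100, 100, 0.05, 0.2, 0.5),
      black76(100, 100, 0.05, 0.2, 0.5))
```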
Abstract:
Dynamic logic is an extension of modal logic originally intended for reasoning about computer programs. The method of proving correctness properties of a computer program using the well-known Hoare logic can be implemented by utilizing the robustness of dynamic logic. For a very broad range of languages and applications in program verification, a theorem prover named KIV (Karlsruhe Interactive Verifier) has already been developed, but its high degree of automation and its complexity make it difficult to use for educational purposes. My research work is aimed at the design and implementation of a similar interactive theorem prover with educational use as its main design criterion. As the key purpose of this system is to serve as an educational tool, it is a self-explanatory system that explains every step of creating a derivation, i.e., proving a theorem. The deductive system is implemented in the platform-independent programming language Java. In addition, a popular combination of the lexical analyzer generator JFlex and the parser generator BYacc/J is used for parsing formulas and programs.
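The connection alluded to here is the standard correspondence between Hoare triples for partial correctness and dynamic-logic formulas using the box modality:

```latex
\[
  \{P\}\;\alpha\;\{Q\} \quad\Longleftrightarrow\quad P \rightarrow [\alpha]\,Q,
\]
% where $[\alpha]Q$ reads ``after every terminating run of program $\alpha$, $Q$ holds''.
```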
Abstract:
The initial timing of face-specific effects in event-related potentials (ERPs) is a point of contention in face processing research. Although effects during the time of the N170 are robust in the literature, inconsistent effects during the time of the P100 challenge the interpretation of the N170 as the initial face-specific ERP effect. The early P100 effects are often attributed to low-level differences between face stimuli and a host of other image categories. Research using sophisticated controls for low-level stimulus characteristics (Rousselet, Husk, Bennett, & Sekuler, 2008) reports robust face effects starting at around 130 ms following stimulus onset. The present study examines the independent components (ICs) of the P100 and N170 complex in the context of a minimally controlled low-level stimulus set and a clear P100 effect for faces versus houses at the scalp. Results indicate that four ICs account for the ERPs to faces and houses in the first 200 ms following stimulus onset. The IC that accounts for the majority of the scalp N170 (icN1a) begins dissociating stimulus conditions at approximately 130 ms, closely replicating the scalp results of Rousselet et al. (2008). The scalp effects at the time of the P100 are accounted for by two constituent ICs (icP1a and icP1b). The IC that projects the greatest voltage at the scalp during the P100 (icP1a) shows a face-minus-house effect over the period of the P100 that is less robust than the N170 effect of icN1a when measured as the average of single-subject differential activation robustness. The second constituent process of the P100 (icP1b), although projecting a smaller voltage to the scalp than icP1a, shows a more robust effect for the face-minus-house contrast starting prior to 100 ms following stimulus onset. Further, the effect expressed by icP1b takes the form of a larger negative projection to medial occipital sites for houses over faces, partially canceling the larger projection of icP1a and thereby enhancing the face positivity at this time. These findings have three main implications for ERP research on face processing. First, the ICs that constitute the face-minus-house P100 effect are independent of the ICs that constitute the N170 effect. This suggests that the P100 effect and the N170 effect are anatomically independent. Second, the timing of the N170 effect can be recovered from scalp ERPs that have spatio-temporally overlapping effects possibly associated with low-level stimulus characteristics. This unmixing of the EEG signals may reduce the need for highly constrained stimulus sets, a constraint that is not always desirable for a topic that is highly coupled to ecological validity. Third, by unmixing the constituent processes of the EEG signals, new analysis strategies become available. In particular, exploring the relationship between cortical processes over the period of the P100 and N170 ERP complex (and beyond) may provide previously inaccessible answers to questions such as: is the face effect a special relationship between low-level and high-level processes along the visual stream?
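A generic sketch of the ICA-then-ERP approach (decompose concatenated epochs, then compare component time courses across conditions) is shown below with synthetic arrays; the study's actual ICA algorithm, montage, and preprocessing are not reproduced.

```python
# Illustrative only: FastICA on concatenated EEG epochs, then per-condition IC "ERPs".
import numpy as np
from sklearn.decomposition import FastICA

n_epochs, n_channels, n_times = 120, 64, 256
eeg = np.random.randn(n_epochs, n_channels, n_times)         # placeholder epochs
faces = np.zeros(n_epochs, dtype=bool); faces[:60] = True     # condition labels

# Fit ICA on data concatenated over epochs (samples x channels).
X = eeg.transpose(0, 2, 1).reshape(-1, n_channels)
ica = FastICA(n_components=20, random_state=0, max_iter=1000)
sources = ica.fit_transform(X).reshape(n_epochs, n_times, 20)

# Condition-wise IC time courses: average each component within condition.
ic_erp_faces = sources[faces].mean(axis=0)        # (n_times, n_components)
ic_erp_houses = sources[~faces].mean(axis=0)
diff = ic_erp_faces - ic_erp_houses               # face-minus-house effect per component
print(np.abs(diff).max(axis=0).round(2))          # peak differential amplitude for each IC
```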
Abstract:
This thesis investigates the pricing effects of idiosyncratic moments. We document that idiosyncratic moments, namely idiosyncratic skewness and idiosyncratic kurtosis, vary over time. If a factor or characteristic is priced, it must show a minimum of variation to be correlated with stock returns. Moreover, we identify two structural breaks in the time series of idiosyncratic kurtosis. Using a sample of US stocks traded on the NYSE, AMEX and NASDAQ markets from January 1970 to December 2013, we run Fama-MacBeth tests at the individual stock level. We document a negative and significant pricing effect of idiosyncratic skewness, consistent with the finding of Boyer et al. (2010). We also report that neither idiosyncratic volatility nor idiosyncratic kurtosis is consistently priced. We run robustness tests using different model specifications and period sub-samples. Our results are robust to the different factors and characteristics usually included in Fama-MacBeth pricing tests. We first split our sample using endogenously determined structural breaks; second, we divide our sample into three equal sub-periods. The results are consistent with our main findings, suggesting that expected returns of individual stocks are explained by idiosyncratic skewness, while both idiosyncratic volatility and idiosyncratic kurtosis are irrelevant to asset prices at the individual stock level. As an alternative method, we run Fama-MacBeth tests at the portfolio level. We find that idiosyncratic skewness is not significantly related to returns on idiosyncratic-skewness-sorted portfolios; however, it is significant when tested against idiosyncratic-kurtosis-sorted portfolios.
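A minimal two-pass Fama-MacBeth sketch (monthly cross-sectional regressions, then the time series of slope estimates), with hypothetical column names and without the standard-error corrections a full study would apply, is given below.

```python
# Two-pass Fama-MacBeth sketch: cross-sectional OLS each month, then test the mean slope.
import numpy as np
import pandas as pd
import statsmodels.api as sm

panel = pd.read_csv("stock_month_panel.csv")   # assumed columns: date, ret, iskew, ikurt,
                                               # ivol, beta, size, bm

def monthly_slopes(g):
    X = sm.add_constant(g[["iskew", "ikurt", "ivol", "beta", "size", "bm"]])
    return sm.OLS(g["ret"], X, missing="drop").fit().params

slopes = panel.groupby("date").apply(monthly_slopes)        # one row of coefficients per month
gamma_bar = slopes.mean()                                   # time-series average slopes
t_stats = gamma_bar / (slopes.std(ddof=1) / np.sqrt(len(slopes)))
print(pd.DataFrame({"mean_slope": gamma_bar, "t": t_stats}).round(3))
```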