796 results for Empirical Algorithm Analysis


Relevance: 30.00%

Abstract:

Innovation has been widely recognized as an important driver of firm competitiveness, and a firm's internal research and development (R&D) activities are often considered to play a critical role in innovation. Internal R&D is, however, not the only source of innovation, as firms may also tap into the knowledge necessary for innovation through various types of sourcing agreements or by collaborating with other organizations. The objective of this study is to analyze how firms organize their innovation boundaries efficiently. Within this context, the analysis focuses, firstly, on the relation between innovation boundaries and firm innovation performance and, secondly, on the factors explaining innovation boundary organization. The innovation literature recognizes that the sources of innovation depend on the nature of technology but does not offer a sufficient tool for analyzing innovation boundary options and their efficiency. Thus, this study suggests incorporating insights from transaction cost economics (TCE), complemented with dynamic governance costs and benefits, into the analysis. The thesis consists of two parts. The first part introduces the background of the study, the research objectives, an overview of the empirical studies, and the general conclusions. The second part consists of five publications. The overall results indicate, firstly, that although the relation between innovation boundary options and innovation performance is partly industry-sector-specific, firm-level search strategies and knowledge transfer capabilities are important for innovation performance independently of the sector. Secondly, the results show that the attributes suggested by TCE alone do not sufficiently explain innovation boundary selection, especially under high levels of (radical) uncertainty. Based on the results, the dynamic governance cost and benefit framework complements static TCE when firm innovation boundaries are scrutinized.

Relevance: 30.00%

Abstract:

The purpose of this study is to examine the ability of macroeconomic indicators and technical analysis to signal market crashes. The macroeconomic indicators examined were the yield spread, the Purchasing Managers' Index and the Consumer Confidence Index. The technical analysis indicators were the moving average, Moving Average Convergence-Divergence (MACD) and the Relative Strength Index (RSI). We studied whether commonly used macroeconomic indicators can also serve as a warning system for stock market crashes. The hypothesis is that signals of recession can be used as signals of a stock market crash and thus as the basis for a hedging strategy. The data are collected from the U.S. markets for the years 1983-2010. The empirical results show that the macroeconomic indicators were able to explain future GDP development in the U.S. over the research period and were statistically significant. A hedging strategy that combined the signals of the yield spread and the Consumer Confidence Index gave the most useful results as a basis for a hedging strategy over the selected time period; it outperformed the buy-and-hold strategy as well as all of the technical-indicator-based hedging strategies.
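
As an illustration of the technical indicators named above, the following is a minimal Python sketch, not taken from the study: it computes a simple moving average, MACD and a rolling-mean RSI from a daily close-price series with pandas. The window lengths are the common defaults and the RSI uses simple rolling means rather than Wilder smoothing, both of which are assumptions.

```python
import pandas as pd

def technical_indicators(close: pd.Series) -> pd.DataFrame:
    """Compute a simple moving average, MACD and a 14-day RSI for a close-price series."""
    sma_200 = close.rolling(200).mean()                 # long-run trend filter

    ema_12 = close.ewm(span=12, adjust=False).mean()
    ema_26 = close.ewm(span=26, adjust=False).mean()
    macd = ema_12 - ema_26                               # MACD line
    macd_signal = macd.ewm(span=9, adjust=False).mean()  # MACD signal line

    delta = close.diff()
    gain = delta.clip(lower=0).rolling(14).mean()
    loss = (-delta.clip(upper=0)).rolling(14).mean()
    rsi = 100 - 100 / (1 + gain / loss)                  # rolling-mean RSI variant

    return pd.DataFrame({"sma_200": sma_200, "macd": macd,
                         "macd_signal": macd_signal, "rsi": rsi})
```

A hedging rule would then map these columns (or the macroeconomic signals) to an exit/hedge decision each month.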

Relevance: 30.00%

Abstract:

This work is devoted to the analysis of signal variation in Cross-Direction and Machine-Direction measurements from a paper web. The data come from a real paper machine. The goal of the work is to reconstruct the basis weight structure of the paper and to predict its behaviour into the future. The resulting synthetic data are needed for simulation of the paper web. The main idea used for describing basis weight variation in the Cross-Direction is the Empirical Orthogonal Functions (EOF) algorithm, which is closely related to the Principal Component Analysis (PCA) method. Signal forecasting in time is based on time-series analysis. The two principal mathematical procedures used in the work are Autoregressive-Moving Average (ARMA) modelling and the Ornstein–Uhlenbeck (OU) process.
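
A minimal sketch of the EOF idea, assuming the cross-direction profiles are stacked as rows of a matrix (the function names and truncation rank are illustrative, not from the thesis): the profiles are decomposed into orthogonal spatial modes via an SVD, and the time-varying amplitudes of the leading modes are what ARMA or OU models would then forecast.

```python
import numpy as np

def eof_decompose(profiles: np.ndarray, n_modes: int = 5):
    """EOF decomposition of a (time x cross-direction) matrix of basis weight profiles.

    Returns the mean profile, the leading spatial modes (EOFs) and their
    time-dependent amplitude series, which can be forecast with e.g. ARMA models.
    """
    mean_profile = profiles.mean(axis=0)
    anomalies = profiles - mean_profile              # remove the mean CD profile
    u, s, vt = np.linalg.svd(anomalies, full_matrices=False)
    modes = vt[:n_modes]                             # spatial EOF patterns
    amplitudes = u[:, :n_modes] * s[:n_modes]        # time series of mode amplitudes
    return mean_profile, modes, amplitudes

def reconstruct(mean_profile, modes, amplitudes):
    """Rebuild approximate profiles from the truncated EOF expansion."""
    return mean_profile + amplitudes @ modes
```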

Relevance: 30.00%

Abstract:

The purpose of this economic geographical dissertation is to study and describe how competitiveness in the Finnish paper industry developed during 2001–2008. During these years, the Finnish paper industry faced economically challenging times. The dissertation attempts to fill the existing gap between theoretical and empirical discussions of economic geographical issues in the paper industry. The main research questions are: How did supply chain costs and margins develop during 2001–2008? How do sales prices, transportation costs, and fixed and variable costs correlate with gross margins in a spatial context? The research object for this case study is a typical large Finnish paper mill that exports over 90% of its production. The longitudinal economic research data were obtained from the case mill's economic control system, and correlation (R²) analysis was used as the main research method. The time series data cover monthly economic and manufacturing observations from the mill from 2001 to 2008. The study reveals the development of prices, costs and transportation at the case mill, and it shows how economic variables correlate with the paper mill's gross margins in various markets in Europe. The research methods of economic geography offer perspectives that pay attention to spatial (market) heterogeneity. This type of research has been quite scarce in the Finnish research tradition of economic geography and supply chain management, so this case study gives new insight into that tradition and its applications. As a concrete empirical result, the dissertation finds that the competitive advantages of the Finnish paper industry were significantly weakened during 2001–2008 by low paper prices, costly manufacturing and expensive transportation. The statistical analysis exposes that, in several important markets, transport costs lowered gross margins as much as decreasing paper prices did, which was a new finding. Paper companies should continuously pay attention to lowering manufacturing and transport costs to achieve more profitable economic performance. A mill's location far from its markets clearly has an economic impact on paper manufacturing, as paper demand is decreasing and oversupply is pressing paper prices down. Therefore, market and economic forecasting in the paper industry is best done at the country and product levels while simultaneously taking the specific economic geographical dimensions into account.
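
The correlation (R²) analysis mentioned above can be illustrated with a minimal sketch; the variable names and the placeholder arrays below are hypothetical, not the dissertation's data. For each market, the coefficient of determination between an explanatory monthly series such as transport cost and the gross margin series is the squared Pearson correlation.

```python
import numpy as np

def r_squared(x: np.ndarray, y: np.ndarray) -> float:
    """Coefficient of determination R^2 of a simple linear fit of y on x,
    i.e. the squared Pearson correlation between the two monthly series."""
    r = np.corrcoef(x, y)[0, 1]
    return r ** 2

# hypothetical monthly series for one market (placeholders, not real data)
transport_cost = np.array([52.0, 55.0, 51.0, 60.0, 58.0, 63.0])
gross_margin = np.array([14.0, 12.5, 14.2, 10.8, 11.5, 9.9])
print(r_squared(transport_cost, gross_margin))
```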

Relevance: 30.00%

Abstract:

This research proposes a methodology for assessing broiler breeder response to changes in the rearing thermal environment. Continuous video recording of the flock analyzed may offer compelling evidence of thermal comfort, as well as other indications of welfare. An algorithm for classifying specific broiler breeder behaviours was developed. Videos were recorded over three boxes in which 30 breeders were reared. The boxes were mounted inside an environmental chamber where the ambient temperature varied from cold to hot. The digital images were processed based on the number of pixels, according to their light intensity variation and binary contrast, allowing a sequence of welfare-related behaviours to be identified. The system used x, y coordinates, where x represents the horizontal distance from the top left of the work area to a point P, and y the vertical distance. The video images were observed, and a grid was developed for identifying the area where the birds stayed and the time they spent there. The sequence was analyzed frame by frame, and the data were compared with the adopted thermoneutral rearing standards. The grid mask was overlaid on the real bird image. The resulting image allows the visualization of clusters, as birds in a flock behave in certain patterns. An algorithm indicating the breeder response to the thermal environment was developed.
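
A minimal sketch of the kind of image processing described above, assuming grayscale frames are available as NumPy arrays; the threshold value and grid size are illustrative assumptions, not the study's parameters. Each frame is binarized by light intensity, and the fraction of "bird" pixels falling in each grid cell is accumulated to estimate where the birds stay and for how long.

```python
import numpy as np

def grid_occupancy(frames, threshold=180, grid=(6, 6)):
    """Binarize each grayscale frame and accumulate the fraction of bright
    ('bird') pixels per grid cell over the whole sequence.

    frames: iterable of 2-D uint8 arrays (one per video frame)
    returns: grid-shaped array; larger values mean longer occupancy of a cell
    """
    occupancy = np.zeros(grid, dtype=float)
    n = 0
    for frame in frames:
        binary = frame > threshold                     # binary contrast image
        h, w = binary.shape
        gy, gx = grid
        for i in range(gy):
            for j in range(gx):
                cell = binary[i * h // gy:(i + 1) * h // gy,
                              j * w // gx:(j + 1) * w // gx]
                occupancy[i, j] += cell.mean()         # fraction of bird pixels in the cell
        n += 1
    return occupancy / max(n, 1)
```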

Relevance: 30.00%

Abstract:

Evapotranspiration is the process of water loss from vegetated soil due to evaporation and transpiration, and it may be estimated by various empirical methods. The objective of this study was to evaluate the performance of the following methods in estimating potential evapotranspiration under the climatic conditions of Uberaba, state of Minas Gerais, Brazil, against the Penman-Monteith standard method (FAO56): Blaney-Criddle, Jensen-Haise, Linacre, Solar Radiation, Hargreaves-Samani, Makkink, Thornthwaite, Camargo, Priestley-Taylor and original Penman. A set of 21 years of monthly data (1990 to 2010) was used, covering the climatic elements temperature, relative humidity, wind speed and insolation. The empirical methods for estimating reference evapotranspiration were compared with the standard method using linear regression, simple statistical analysis, the Willmott agreement index (d) and the performance index (c). The Makkink and Camargo methods showed the best performance, with "c" values of 0.75 and 0.66, respectively. The Hargreaves-Samani method presented the best linear relation with the standard method, with a correlation coefficient (r) of 0.88.
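
The agreement and performance indices used in the comparison can be written compactly. The sketch below is a general formulation, not code from the study: it computes Willmott's agreement index d and the performance index c = r·d as commonly defined, given an estimated series and the reference (FAO56) series.

```python
import numpy as np

def willmott_d(estimated: np.ndarray, observed: np.ndarray) -> float:
    """Willmott agreement index: d = 1 - sum((P-O)^2) / sum((|P-Obar| + |O-Obar|)^2)."""
    o_bar = observed.mean()
    num = np.sum((estimated - observed) ** 2)
    den = np.sum((np.abs(estimated - o_bar) + np.abs(observed - o_bar)) ** 2)
    return 1.0 - num / den

def performance_c(estimated: np.ndarray, observed: np.ndarray) -> float:
    """Performance index c = r * d, where r is the Pearson correlation coefficient."""
    r = np.corrcoef(estimated, observed)[0, 1]
    return r * willmott_d(estimated, observed)
```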

Relevance: 30.00%

Abstract:

This paper presents software, developed with the Delphi programming language, to compute a reservoir's annual regulated active storage based on the sequent-peak algorithm. Mathematical models used for that purpose generally require extended hydrological series, and the analysis of those series is usually performed with spreadsheets or graphical representations. On that basis, software was developed for the calculation of reservoir active capacity. An example calculation is shown using 30 years (1977 to 2009) of monthly mean flow historical data from the Corrente River, located in the São Francisco River Basin, Brazil. As an additional tool, an interface was developed to support water resources management, helping to manipulate the data and to highlight information of interest to the user. Moreover, with this interface, irrigation districts where water consumption is higher can be analyzed as a function of specific seasonal water demand situations. A practical application shows that the program performs the calculation originally proposed. It was designed to keep information organized and retrievable at any time, and to show simulations of seasonal water demands throughout the year, contributing to studies concerning reservoir projects. With its functionality, the program is an important tool for decision making in water resources management.
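
The sequent-peak algorithm itself is compact. The sketch below is a generic textbook formulation in Python, not the Delphi code of the paper, and the example series is hypothetical: the cumulative deficit K_t = max(0, K_{t-1} + demand_t - inflow_t) is tracked month by month, and the required active storage is its maximum.

```python
import numpy as np

def sequent_peak_storage(inflow: np.ndarray, demand: np.ndarray) -> float:
    """Required active storage by the sequent-peak algorithm.

    inflow, demand: monthly volumes over the historical series (same units).
    K_t = max(0, K_{t-1} + demand_t - inflow_t); the answer is max K_t.
    """
    deficit = 0.0
    required = 0.0
    for q, d in zip(inflow, demand):
        deficit = max(0.0, deficit + d - q)    # accumulated deficit since the reservoir was last full
        required = max(required, deficit)
    return required

# hypothetical example: constant demand equal to 80 % of the mean monthly inflow
inflow = np.array([120.0, 95.0, 60.0, 30.0, 20.0, 15.0, 25.0, 40.0, 70.0, 100.0, 130.0, 140.0])
demand = np.full(12, 0.8 * inflow.mean())
print(sequent_peak_storage(inflow, demand))
```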

Relevance: 30.00%

Abstract:

This thesis investigates the effectiveness of time-varying hedging during the financial crisis of 2007 and the European debt crisis of 2010. The seven test economies are part of the European Monetary Union and are in different economic conditions. The time-varying hedge ratio was constructed from conditional variances and correlations estimated with multivariate GARCH models. Three different underlying portfolios are used: national equity markets, government bond markets, and the combination of the two. These underlying portfolios were hedged with credit default swaps. The empirical part includes in-sample and out-of-sample analyses, conducted with both constant and dynamic models. In almost every case the dynamic models outperform the constant ones in determining the hedge ratio. We could not find any statistically significant evidence to support the use of the asymmetric dynamic conditional correlation model. Our findings are in line with the prior literature and support the use of a time-varying hedge ratio. Finally, we find that in some cases credit default swaps are not suitable instruments for hedging and act more as speculative instruments.
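
The time-varying hedge ratio described above follows from the conditional moments: h_t = Cov_t(r_portfolio, r_hedge) / Var_t(r_hedge). The sketch below is a simplified stand-in that uses an EWMA estimate of the conditional covariance instead of a full multivariate GARCH fit; the decay parameter and return arrays are illustrative assumptions, not the thesis specification.

```python
import numpy as np

def ewma_hedge_ratio(r_asset: np.ndarray, r_hedge: np.ndarray, lam: float = 0.94) -> np.ndarray:
    """Time-varying hedge ratio h_t = cov_t(asset, hedge) / var_t(hedge),
    with conditional moments from an exponentially weighted moving average."""
    cov = np.cov(r_asset, r_hedge)[0, 1]      # initialize with sample moments
    var = np.var(r_hedge)
    h = np.empty(len(r_asset))
    for t in range(len(r_asset)):
        h[t] = cov / var                       # ratio based on information up to t-1
        cov = lam * cov + (1 - lam) * r_asset[t] * r_hedge[t]
        var = lam * var + (1 - lam) * r_hedge[t] ** 2
    return h
```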

Relevance: 30.00%

Abstract:

The theoretical part of the study concentrated on finding theoretical frameworks for optimizing the number of stock keeping units (SKUs) needed in the manufacturing industry. The goal was to find ways for a company to acquire an optimal collection of stock keeping units needed for manufacturing the required amount of end products. The research follows a constructive research approach leaning towards practical problem solving. In the empirical part of the study, a recipe search tool was developed for an existing database used in the target company. The purpose of the tool was to find all the recipes meeting the EUPS performance standard and to put the recipes in ranking order using the data available in the database. The ranking of the recipes was formed from a combination of the performance measures and the price of the recipes. In addition, the tool identified what kind of paper SKUs were needed to manufacture the best performing recipes. The tool developed during this process meets the requirements: it makes it much easier and faster to search for all the recipes meeting the EUPS standard. Furthermore, many future development possibilities for the tool were discovered while writing the thesis.
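
A minimal sketch of the ranking idea; the field names, weights and normalization are assumptions, not the company's scoring rules. Recipes that meet the standard are filtered, their performance measure and price are scaled to a common range, and a weighted score orders them.

```python
from dataclasses import dataclass

@dataclass
class Recipe:
    name: str
    meets_eups: bool      # passes the EUPS performance standard
    performance: float    # aggregated performance measure (higher is better)
    price: float          # cost of the recipe (lower is better)

def rank_recipes(recipes, w_perf=0.7, w_price=0.3):
    """Rank EUPS-compliant recipes by a weighted score of scaled performance and cheapness."""
    ok = [r for r in recipes if r.meets_eups]
    if not ok:
        return []
    p_lo, p_hi = min(r.performance for r in ok), max(r.performance for r in ok)
    c_lo, c_hi = min(r.price for r in ok), max(r.price for r in ok)

    def scale(value, lo, hi):
        return (value - lo) / (hi - lo) if hi > lo else 1.0

    def score(r):
        return (w_perf * scale(r.performance, p_lo, p_hi)
                + w_price * (1.0 - scale(r.price, c_lo, c_hi)))

    return sorted(ok, key=score, reverse=True)
```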

Relevance: 30.00%

Abstract:

The goal of this thesis is to study the knowledge retention mechanisms used when individual experts leave the case company, analysing the reasons for the choice of mechanisms and how successful the knowledge retention process is depending on that choice. The theoretical part discusses the origins of knowledge retention processes in the literature, the existing knowledge retention mechanisms and the practical issues of their implementation. The empirical part of the study is designed as interviews with employees, followed by a discussion of the findings. The empirical findings indicate the following reasons for the choice of knowledge retention mechanisms: the type of knowledge retained, the specialty of the leaving expert, and the time and distance constraints of the particular case. The following factors influenced the success of a retention process: the choice of knowledge retention mechanisms, the use of a combination of mechanisms, and the creation of knowledge retention plans. The results may be useful for those interested in the factors influencing knowledge retention processes when experts depart.

Relevance: 30.00%

Abstract:

By coupling the Boundary Element Method (BEM) and the Finite Element Method (FEM), an algorithm is developed that combines the advantages of both numerical methods. The main aim of the work is the time domain analysis of general three-dimensional wave propagation problems in elastic media. In addition, mathematical and numerical aspects of the related BE, FE and BE/FE formulations are discussed. The coupling algorithm allows the investigation of elastodynamic problems with a BE subdomain and an FE subdomain. In order to assess the performance of the coupling algorithm, two problems are solved and their results compared with other numerical solutions.

Relevance: 30.00%

Abstract:

This paper applies the Multi-Harmonic Nonlinear Receptance Coupling Approach (MUHANORCA) (Ferreira 1998) to evaluate the frequency response characteristics of a beam which is clamped at one end and supported at the other end by a joint with nonlinear cubic stiffness. In order to apply the substructure coupling technique, the problem was characterised by coupling a clamped linear beam with a nonlinear cubic stiffness joint. The experimental results were obtained by sinusoidal excitation with a special force control algorithm in which the level of the fundamental force is kept constant and the level of the harmonics is kept at zero for all the frequencies measured.
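
A minimal sketch of such a force controller, under stated assumptions (the iteration gain, FFT-based harmonic estimation and data layout are illustrative, not the algorithm used in the paper): after each measurement the fundamental component of the drive signal is rescaled toward the target force amplitude, and anti-phase corrections are injected at the harmonics to drive them toward zero.

```python
import numpy as np

def update_drive(drive, force, fs, f0, target_f1, n_harm=5, gain=0.5):
    """One iteration of a simple multi-harmonic force controller.

    drive : dict mapping harmonic order -> complex amplitude of the excitation signal
    force : measured force time record (numpy array) sampled at fs
    Keeps |F(f0)| near target_f1 and pushes the harmonic force components toward zero.
    """
    n = len(force)
    spectrum = np.fft.rfft(force) * 2.0 / n            # single-sided complex amplitudes
    freqs = np.fft.rfftfreq(n, 1.0 / fs)
    new = dict(drive)
    for k in range(1, n_harm + 1):
        comp = spectrum[np.argmin(np.abs(freqs - k * f0))]
        if k == 1:
            # rescale the fundamental so the measured amplitude tracks the target
            new[1] = drive[1] * (1.0 + gain * (target_f1 - abs(comp)) / target_f1)
        else:
            # subtract a scaled copy of the measured harmonic (anti-phase correction)
            new[k] = drive.get(k, 0.0) - gain * comp
    return new
```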

Relevance: 30.00%

Abstract:

Longitudinal surveys are increasingly used to collect event history data on person-specific processes such as transitions between labour market states. Survey-based event history data pose a number of challenges for statistical analysis, including survey errors due to sampling, non-response, attrition and measurement. This study deals with non-response, attrition and measurement errors in event history data and the bias they cause in event history analysis. The study also discusses some choices faced by a researcher using longitudinal survey data for event history analysis and demonstrates their effects. These choices include whether a design-based or a model-based approach is taken, which subset of the data to use and, if a design-based approach is taken, which weights to use. The study takes advantage of the possibility of using combined longitudinal survey-register data. The Finnish subset of the European Community Household Panel (FI ECHP) survey for waves 1–5 was linked at the person level with longitudinal register data. Unemployment spells were used as the study variables of interest. Lastly, a simulation study was conducted to assess the statistical properties of the Inverse Probability of Censoring Weighting (IPCW) method in a survey data context. The study shows how combined longitudinal survey-register data can be used to analyse and compare the non-response and attrition processes, test the type of missingness mechanism and estimate the size of the bias due to non-response and attrition. In our empirical analysis, initial non-response turned out to be a more important source of bias than attrition. Reported unemployment spells were subject to seam effects, omissions and, to a lesser extent, over-reporting. The use of proxy interviews tended to cause spell omissions. An often-ignored phenomenon, classification error in reported spell outcomes, was also found in the data. Neither the Missing At Random (MAR) assumption about the non-response and attrition mechanisms, nor the classical assumptions about measurement errors, turned out to be valid. Measurement errors in both spell durations and spell outcomes were found to cause bias in estimates from event history models; low measurement accuracy affected the estimates of the baseline hazard most. The design-based estimates based on data from respondents to all waves of interest and weighted by the last-wave weights displayed the largest bias. Using all the available data, including the spells of attriters up to the time of attrition, helped to reduce attrition bias. Lastly, the simulation study showed that the IPCW correction to design weights reduces bias due to dependent censoring in design-based Kaplan-Meier and Cox proportional hazards model estimators. The study discusses the implications of the results for survey organisations collecting event history data, researchers using surveys for event history analysis, and researchers who develop methods to correct for non-sampling biases in event history data.
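
A minimal illustration of the IPCW idea, under standard textbook assumptions rather than the thesis code: the censoring survival function G(t) is estimated with a Kaplan-Meier fit that treats censorings as the events, and each observed spell ending receives the weight 1/G(t-). The resulting weights can then be passed to weighted Kaplan-Meier or Cox estimators.

```python
import numpy as np

def censoring_survival(time, event):
    """Kaplan-Meier estimate of the censoring survival function G(t),
    obtained by treating censorings (event == 0) as the 'events'."""
    uniq = np.unique(time)
    g, surv = 1.0, []
    for u in uniq:
        at_risk = np.sum(time >= u)
        d_cens = np.sum((time == u) & (event == 0))
        g *= 1.0 - d_cens / at_risk
        surv.append(g)
    return uniq, np.array(surv)

def ipc_weights(time, event):
    """IPCW weights: observed events get 1 / G(t-), censored observations get 0."""
    uniq, G = censoring_survival(time, event)
    w = np.zeros(len(time))
    for i, (t, e) in enumerate(zip(time, event)):
        if e == 1:
            g_minus = G[uniq < t][-1] if np.any(uniq < t) else 1.0   # left limit G(t-)
            w[i] = 1.0 / g_minus
    return w
```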

Relevance: 30.00%

Abstract:

This thesis presents a one-dimensional, semi-empirical dynamic model for the simulation and analysis of a calcium looping process for post-combustion CO2 capture. Reducing greenhouse gas emissions from fossil fuel power production requires rapid action, including the development of efficient carbon capture and sequestration technologies. The development of new carbon capture technologies can be expedited with modelling tools: techno-economic evaluation of new capture processes can be done quickly and cost-effectively with computational models before building expensive pilot plants. Post-combustion calcium looping is a developing carbon capture process which utilizes fluidized bed technology with lime as the sorbent. The main objective of this work was to analyse the technological feasibility of the calcium looping process at different scales with a computational model. A one-dimensional dynamic model was applied to the calcium looping process, simulating the behaviour of the interconnected circulating fluidized bed reactors. The model couples fundamental mass and energy balance solvers with semi-empirical models describing solid behaviour in a circulating fluidized bed and the chemical reactions occurring in the calcium loop. In addition, fluidized bed combustion, heat transfer and core-wall layer effects were modelled. The calcium looping model framework was successfully applied to a 30 kWth laboratory scale unit and a 1.7 MWth pilot scale unit, and used to design a conceptual 250 MWth industrial scale unit. Valuable information was gathered on the behaviour of the small scale laboratory device. In addition, the interconnected behaviour of the pilot plant reactors and the effect of solid fluidization on the thermal and carbon dioxide balances of the system were analysed. The scale-up study provided practical information on the thermal design of an industrial-sized unit, the selection of particle size and operability in different load scenarios.
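
As a toy illustration of the kind of one-dimensional balance such a model solves (a heavily simplified stand-in, not the thesis model; the velocity, rate constant and grid values are assumptions): the CO2 concentration profile along a carbonator riser is advanced with an explicit upwind discretization of an advection-reaction mass balance.

```python
import numpy as np

def step_co2_profile(c, dt, dx, u, k, c_eq):
    """One explicit time step of dc/dt = -u dc/dx - k (c - c_eq)
    on a 1-D grid (first-order upwind; stable for u*dt/dx <= 1)."""
    c_new = c.copy()
    c_new[1:] = (c[1:]
                 - u * dt / dx * (c[1:] - c[:-1])      # upward transport with the gas flow
                 - dt * k * (c[1:] - c_eq))            # absorption toward the equilibrium level
    return c_new

# hypothetical riser: 50 cells, inlet CO2 fraction held at c[0]
c = np.full(50, 0.15)
for _ in range(2000):
    c = step_co2_profile(c, dt=0.01, dx=0.4, u=5.0, k=0.8, c_eq=0.01)
```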

Relevance: 30.00%

Abstract:

This thesis examines the application of data envelopment analysis (DEA) as an equity portfolio selection criterion in the Finnish stock market during the period 2001-2011. A sample covering the majority of the publicly traded firms on the Helsinki Stock Exchange is examined. Data envelopment analysis is used to determine the efficiency of firms from a set of input and output financial parameters consisting of asset utilization, liquidity, capital structure, growth, valuation and profitability measures. The firms are divided into artificial industry categories because of the industry-specific nature of the input and output parameters. Comparable portfolios are formed within each industry category according to the efficiency scores given by the DEA, and the performance of the portfolios is evaluated with several measures. The empirical evidence of this thesis suggests that, with certain limitations, data envelopment analysis can successfully be used as a portfolio selection criterion in the Finnish stock market when the portfolios are rebalanced annually according to the DEA efficiency scores. However, when the portfolios were rebalanced every two or three years, the results are mixed and inconclusive.
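
A minimal sketch of how DEA efficiency scores can be computed (an input-oriented CCR formulation solved as a linear program with scipy; the data layout is illustrative, not the thesis implementation): for each firm, the smallest radial contraction θ of its inputs that can still be matched by a non-negative combination of all firms' inputs and outputs is its efficiency score.

```python
import numpy as np
from scipy.optimize import linprog

def dea_ccr_efficiency(X, Y):
    """Input-oriented CCR efficiency scores.

    X: (n_firms, n_inputs)  input data (e.g. cost and balance-sheet measures)
    Y: (n_firms, n_outputs) output data (e.g. profitability and growth measures)
    Returns an array of efficiency scores theta in (0, 1].
    """
    n, m = X.shape
    s = Y.shape[1]
    scores = np.empty(n)
    for o in range(n):
        c = np.zeros(1 + n)
        c[0] = 1.0                       # minimize theta; remaining variables are lambdas
        A_ub = np.zeros((m + s, 1 + n))
        b_ub = np.zeros(m + s)
        A_ub[:m, 0] = -X[o]              # sum_j lambda_j * x_j <= theta * x_o
        A_ub[:m, 1:] = X.T
        A_ub[m:, 1:] = -Y.T              # sum_j lambda_j * y_j >= y_o
        b_ub[m:] = -Y[o]
        res = linprog(c, A_ub=A_ub, b_ub=b_ub,
                      bounds=[(0, None)] * (1 + n), method="highs")
        scores[o] = res.fun
    return scores
```

Firms with a score of 1 form the efficient frontier of their industry category; portfolios can then be built from the top-ranked firms, as described above.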