Abstract:
Fuzzy logic admits infinitely many intermediate truth values between false and true. Building on this principle, this study developed a system based on fuzzy rules that indicates the body mass index of ruminant animals in order to determine the best time for slaughter. The controller takes the variables weight and height as inputs and produces as output a new body mass index, the Fuzzy Body Mass Index (Fuzzy BMI), which can serve as a detection system at slaughtering time, comparing animals with one another through the linguistic variables "Very Low", "Low", "Average", "High" and "Very High". To demonstrate the application of this fuzzy system, 147 Nellore beeves were analyzed to determine the Fuzzy BMI value of each animal and to indicate the body-mass standing within any herd. The performance of the system was validated statistically: a Pearson correlation coefficient of 0.923 represents a high positive correlation, indicating that the proposed method is appropriate. The method thus allows evaluation of the herd by comparing each animal within the group, providing a quantitative basis for farmer decision-making. It is concluded that this study established a computational method based on fuzzy logic that mimics part of human reasoning and interprets the body mass index of bovines of any breed and in any region of the country.
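As a rough illustration of the kind of fuzzy inference such a controller performs, the sketch below fuzzifies weight and height with triangular membership functions, fires min-AND rules, and defuzzifies with a weighted average; the membership breakpoints and the rule base are hypothetical, since the study's actual definitions are not reproduced here.

```python
# Minimal sketch of a rule-based fuzzy BMI estimator. The triangular
# membership breakpoints and the rule consequents below are hypothetical.

def tri(x, a, b, c):
    """Triangular membership function with feet a, c and peak b."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x <= b else (c - x) / (c - b)

def fuzzy_bmi(weight_kg, height_m):
    # Fuzzify the inputs (hypothetical ranges for adult beef cattle).
    w_low  = tri(weight_kg, 200, 300, 400)
    w_high = tri(weight_kg, 350, 500, 650)
    h_low  = tri(height_m, 1.1, 1.25, 1.4)
    h_high = tri(height_m, 1.3, 1.5, 1.7)

    # Toy rule base: rule strength = min of the antecedents (fuzzy AND).
    # Consequents are representative crisp scores for the linguistic terms.
    rules = [
        (min(w_low,  h_high), 1.0),   # light and tall   -> "Very Low"
        (min(w_low,  h_low),  2.5),   # light and short  -> "Low"/"Average"
        (min(w_high, h_high), 3.0),   # heavy and tall   -> "Average"/"High"
        (min(w_high, h_low),  4.5),   # heavy and short  -> "Very High"
    ]
    num = sum(s * c for s, c in rules)
    den = sum(s for s, _ in rules)
    return num / den if den > 0 else None  # weighted-average defuzzification

print(fuzzy_bmi(420.0, 1.35))  # e.g. a mid-range animal scores near "Average"
```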
Abstract:
The present study aimed to evaluate the use of an Artificial Neural Network to correlate the values resulting from chemical analyses of coffee samples with the values of their sensory analyses. The coffee samples were of the species Coffea arabica L., cultivars Acaiá do Cerrado, Topázio, Acaiá 474-19 and Bourbon, collected in the southern region of the state of Minas Gerais. The chemical analyses were carried out for reducing and non-reducing sugars, and the quality of the beverage was evaluated by sensory analysis. The Artificial Neural Network used the values from the chemical analyses as input variables and the values from the sensory analysis as output values. Multiple linear regression of the sensory analysis values on the chemical analysis values yielded a determination coefficient of 0.3106, whereas the Artificial Neural Network achieved a success rate of 80.00% in classifying the sensory analysis values.
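A minimal sketch of such a comparison, assuming scikit-learn and synthetic data (the study's actual dataset, network topology and training procedure are not reproduced here), could look as follows.

```python
# Sketch of the ANN-vs-regression comparison on synthetic data; the feature
# ranges and class labels are hypothetical stand-ins for the study's data.
import numpy as np
from sklearn.neural_network import MLPClassifier
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(0)
# Hypothetical inputs: reducing and non-reducing sugar contents (%).
X = rng.uniform([0.5, 5.0], [1.5, 9.0], size=(120, 2))
# Hypothetical sensory classes derived from the inputs plus noise.
y = (X[:, 0] * 2 + X[:, 1] * 0.5 + rng.normal(0, 0.4, 120)).round().astype(int) % 3

# Multiple linear regression: report the determination coefficient R^2.
r2 = LinearRegression().fit(X, y).score(X, y)

# Small feed-forward network trained on the same inputs.
clf = MLPClassifier(hidden_layer_sizes=(8,), max_iter=2000, random_state=0).fit(X, y)
acc = clf.score(X, y)  # fraction of correctly classified samples

print(f"linear regression R^2 = {r2:.3f}, ANN classification accuracy = {acc:.2%}")
```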
Abstract:
Over the past decade, organizations worldwide have begun to widely adopt agile software development practices, which offer greater flexibility in the face of frequently changing business requirements, better cost effectiveness through the minimization of waste, faster time-to-market, and closer collaboration between business and IT. At the same time, IT services are increasingly outsourced to third parties, giving organizations the ability to focus on their core capabilities while taking advantage of better demand scalability, access to specialized skills, and cost benefits. An output-based pricing model, in which customers pay directly for the functionality delivered rather than for the effort spent, is quickly becoming a new trend in IT outsourcing: it transfers risk away from the customer while giving the supplier much stronger incentives to optimize processes and improve efficiency, producing a true win-win outcome. Despite the widespread adoption of both agile practices and output-based outsourcing, there is little formal research on how the two can be effectively combined in practice. Moreover, little practical guidance exists on how companies can measure the performance of agile projects delivered in an output-based outsourced environment. This research attempted to shed light on the issue by developing a practical project monitoring framework that organizations can readily apply to monitor the performance of agile projects in an output-based outsourcing context, thus taking advantage of the combined benefits of such an arrangement. Following a modified action research approach, the research was divided into two cycles, each consisting of Identification, Analysis, Verification, and Conclusion phases. During Cycle 1, a list of six Key Performance Indicators (KPIs) was proposed and accepted by the professionals in the studied multinational organization; this list formed the core of the proposed framework and answered the first research sub-question of what needs to be measured. In Cycle 2, a more in-depth analysis was provided for each of the suggested KPIs, including techniques for capturing, calculating, and evaluating the information each one provides. In the course of Cycle 2, the second research sub-question was answered, clarifying how the data for each KPI needed to be measured, interpreted, and acted upon. Consequently, after two incremental research cycles, the primary research question was answered, describing a practical framework that may be used for monitoring the performance of agile IT projects delivered in an output-based outsourcing context. The framework was evaluated by professionals within the context of the studied organization and received positive feedback across all four evaluation criteria set forth in this research: low overhead of data collection, high value of the provided information, ease of understanding of the metric dashboard, and high generalizability of the proposed framework.
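Purely as an illustration of how one indicator in such a monitoring framework might be captured and evaluated, the sketch below defines a generic KPI record; the thesis's six actual KPIs are not named in this abstract, so the example indicator and its threshold are hypothetical.

```python
# Illustrative-only sketch of a KPI record with capture and evaluation steps.
# The example indicator below is hypothetical, not one of the thesis's six KPIs.
from dataclasses import dataclass
from typing import Callable

@dataclass
class KPI:
    name: str
    capture: Callable[[], float]   # how the raw data point is collected
    target: float                  # threshold agreed with the supplier
    higher_is_better: bool = True

    def evaluate(self) -> str:
        value = self.capture()
        ok = value >= self.target if self.higher_is_better else value <= self.target
        return f"{self.name}: {value:.2f} ({'on track' if ok else 'attention needed'})"

# Hypothetical example: story points accepted per sprint, from a tracker export.
velocity = KPI("Delivered velocity", capture=lambda: 42.0, target=40.0)
print(velocity.evaluate())
```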
Abstract:
It has been known since the 1970s that the laser beam is suitable for processing paper materials. In this thesis, the term paper materials covers all wood-fibre based materials, such as dried pulp, copy paper, newspaper, cardboard, corrugated board and tissue paper. Accordingly, laser processing here means all laser treatments resulting in material removal, such as cutting, partial cutting, marking, creasing and perforation, that can be used to process paper materials. Laser technology provides many advantages for the processing of paper materials: it is a non-contact method, it offers freedom of processing geometry, and it is a reliable technology for non-stop production. The packaging industry in particular is a very promising area for laser processing applications. Nevertheless, there were only a few industrial laser processing applications worldwide even at the beginning of the 2010s. One reason for the small-scale use of lasers in paper material manufacturing is the shortage of published research and scientific articles. Another problem restraining the use of lasers for processing paper materials is the colouration of the material, i.e. the yellowish and/or greyish colour of the cut edge appearing during or after cutting. These are the main reasons why the topic of this thesis concerns the characterization of the interaction of the laser beam with paper materials. The study was carried out in the Laboratory of Laser Processing at Lappeenranta University of Technology (Finland). The laser equipment used was a TRUMPF TLF 2700 carbon dioxide laser producing a beam with a wavelength of 10.6 μm over a power range of 190-2500 W (laser power on the workpiece). The interaction of the laser beam with the paper material was studied by treating dried kraft pulp (grammage of 67 g m-2) with different laser power levels, focal plane position settings and interaction times. The interaction between the laser beam and the dried kraft pulp was monitored with several devices, i.e. a spectrometer, a pyrometer and an active illumination imaging system. In this way it was possible to create an input and output parameter diagram and to study the effects of the input and output parameters. When the interaction phenomena are understood, process development can be carried out and even new innovations developed. Filling this gap in knowledge of the interaction phenomena can pave the way for wider use of laser technology in the paper making and converting industry. It was concluded that the interaction of the laser beam with paper material has two mechanisms that depend on the focal plane position range. The assumed interaction mechanism B appears at average focal plane positions between 3.4 mm and 2.4 mm, and the assumed interaction mechanism A at average focal plane positions between 0.4 mm and -0.6 mm, both in the experimental setup used. A focal plane position of 1.4 mm represents the midzone between these two mechanisms. During the interaction, holes are formed gradually: first a small hole forms in the interaction area at the centre of the laser beam cross-section, and after that, as a function of interaction time, the hole expands until the interaction between the laser beam and the dried kraft pulp ends. Image analysis shows that at the beginning of the interaction small holes of very good quality are formed. Black colour and a heat affected zone evidently appear as a function of interaction time. This reveals that there are still different interaction phases within interaction mechanisms A and B.
These interaction phases appear as a function of time and also as a function of the peak intensity of the laser beam. The limit peak intensity is the value that divides interaction mechanisms A and B from one-phase interaction into dual-phase interaction: all peak intensity values below the limit belong to MAOM (interaction mechanism A one-phase mode) or MBOM (interaction mechanism B one-phase mode), and values above it belong to MADM (interaction mechanism A dual-phase mode) or MBDM (interaction mechanism B dual-phase mode). The decomposition process of cellulose at temperatures between 380-500°C is the evolution of hydrocarbons: the long cellulose molecule is split into smaller volatile hydrocarbons in this temperature range. As the temperature increases, the decomposition process of the cellulose molecule changes. In the range of 700-900°C, the cellulose molecule mainly decomposes into H2 gas, which is why this range is called the evolution of hydrogen. Interaction in this range starts (as in the MAOM and MBOM ranges) when a small, good-quality hole is formed. This is due to "direct evaporation" of the pulp via the evolution-of-hydrogen decomposition process, and it can be seen in the spectrometer as a high-intensity peak of yellow light (in the range of 588-589 nm), which corresponds to a temperature of ~1750ºC. The pyrometer does not detect this high-intensity peak, since it cannot detect the physical phase change from solid kraft pulp to gaseous compounds. As the interaction between the laser beam and the dried kraft pulp continues, the hypothesis is that three auto-ignition processes occur. The auto-ignition temperature of a substance is the lowest temperature at which it will spontaneously ignite in a normal atmosphere without an external source of ignition, such as a flame or spark. Three auto-ignition processes appear in the MADM and MBDM ranges, namely: 1. the auto-ignition temperature of hydrogen (H2) is 500ºC, 2. the auto-ignition temperature of carbon monoxide (CO) is 609ºC, and 3. the auto-ignition temperature of carbon (C) is 700ºC. These three auto-ignition processes lead to the formation of a plasma plume with strong emission of radiation in the visible range, seen as an increase of intensity in the wavelength range of ~475-652 nm. The pyrometer shows the maximum temperature just after this ignition. The plasma plume is assumed to scatter the laser beam so that it interacts with a larger area of the dried kraft pulp than the actual area of the beam cross-section; this assumed scattering also reduces the peak intensity. The results thus show that the presumably scattered light, with low peak intensity, interacts with a large area of the hole edges, and because of the low peak intensity this interaction happens at low temperature. The interaction between the laser beam and the dried kraft pulp therefore turns from the evolution of hydrogen to the evolution of hydrocarbons, which leads to the black colour of the hole edges.
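The mode classification described above can be summarized in a small sketch; the focal plane position ranges follow the values reported here, while the limit peak intensity is left as a parameter because its value is not given in this abstract.

```python
# Sketch of the MAOM/MADM/MBOM/MBDM classification described in the abstract.
# The focal plane ranges follow the reported values; the limit peak intensity
# is a free parameter here because its value is not stated in the abstract.

def interaction_mode(focal_plane_mm, peak_intensity, limit_peak_intensity):
    """Classify laser/dried-kraft-pulp interaction into one of the four modes."""
    if 2.4 <= focal_plane_mm <= 3.4:
        mechanism = "B"
    elif -0.6 <= focal_plane_mm <= 0.4:
        mechanism = "A"
    else:
        return "midzone/undefined"  # e.g. 1.4 mm lies between the mechanisms
    phase = "OM" if peak_intensity < limit_peak_intensity else "DM"
    return f"M{mechanism}{phase}"   # e.g. MADM = mechanism A, dual-phase mode

print(interaction_mode(0.0, peak_intensity=1.2e6, limit_peak_intensity=1.0e6))  # MADM
```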
Abstract:
This work describes the methodology, basic procedures and instrumentation employed by the Solar Energy Laboratory at the Universidade Federal do Rio Grande do Sul for the determination of current-voltage characteristic curves of photovoltaic modules. Following this methodology, I-V characteristic curves were acquired for several modules under diverse conditions. The main electrical parameters were determined, and the influence of temperature and irradiance on photovoltaic module performance was quantified. It was observed that most of the tested modules presented output power values considerably lower than those specified by the manufacturers. The described hardware allows the testing of modules with open-circuit voltage up to 50 V and short-circuit current up to 8 A.
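A minimal sketch of how the main electrical parameters can be extracted from a measured I-V curve follows, assuming simple arrays of voltage and current samples; the laboratory's actual acquisition hardware and correction procedures are not modelled.

```python
# Sketch of extracting Isc, Voc, Pmax and the fill factor from I-V samples.
# The toy curve below is illustrative, not measured module data.
import numpy as np

def iv_parameters(v, i):
    v, i = np.asarray(v), np.asarray(i)
    isc = np.interp(0.0, v, i)               # short-circuit current (at V = 0)
    voc = np.interp(0.0, i[::-1], v[::-1])   # open-circuit voltage (at I = 0)
    p = v * i
    k = p.argmax()                           # maximum power point
    ff = p[k] / (voc * isc)                  # fill factor
    return {"Isc": isc, "Voc": voc, "Pmax": p[k], "FF": ff}

# Toy single-diode-like curve with monotonically decreasing current.
v = np.linspace(0, 0.6, 61)
i = 5.0 * (1 - np.exp((v - 0.6) / 0.05))
print(iv_parameters(v, i))
```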
Abstract:
Wind power is a low-carbon form of energy production that reduces society's dependence on fossil fuels. Finland has adopted wind energy production into its climate change mitigation policy, which has led to changes in legislation and guidelines, the allocation of regional wind power areas and the establishment of a feed-in tariff. Wind power production has indeed accelerated in Finland after two decades of relatively slow growth; for instance, from 2010 to 2011 wind energy production increased by 64%, but there is still a long way to the national goal of 6 TWh by 2020. This thesis introduces a GIS-based decision-support methodology for the preliminary identification of areas suitable for wind energy production, including an estimation of their level of risk. The goal of the study was to define the least risky places for wind energy development within the Kemiönsaari municipality in Southwest Finland. Spatial multicriteria decision analysis (SMCDA) has been used for searching for suitable wind power areas, as well as for many other location-allocation problems. SMCDA scrutinizes complex, ill-structured decision problems in a GIS environment using constraints and evaluation criteria, which are aggregated using weighted linear combination (WLC). Weights for the evaluation criteria were acquired using the analytic hierarchy process (AHP) with nine expert interviews. Subsequently, the feasible alternatives were ranked in order to provide a recommendation, and finally a sensitivity analysis was conducted to determine the robustness of the recommendation. The first aim of the study was to scrutinize the suitability and necessity of the existing data for this SMCDA study. Most of the available data sets were of sufficient resolution and quality. The necessity of the input data was evaluated qualitatively for each data set based on, e.g., constraint coverage and attribute weights. Attribute quality was estimated mainly qualitatively in terms of attribute comprehensiveness, operationality, measurability, completeness, decomposability, minimality and redundancy. The most significant quality issue was redundancy, as interdependencies are not tolerated by WLC and AHP includes no measures to detect them. The third aim was to define the least risky areas for wind power development within the study area. The two highest ranking areas were Nordanå-Lövböle and Påvalsby, followed by Helgeboda, Degerdal, Pungböle, Björkboda, and Östanå-Labböle. The fourth aim was to assess the reliability of the recommendation: the two top-ranking areas proved robust, whereas the others were more sensitive.
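The WLC aggregation step at the heart of such an SMCDA workflow reduces, per raster cell, to a weighted sum of standardized criterion scores masked by the constraints; the sketch below uses hypothetical toy layers and AHP-style weights, not the thesis data.

```python
# Sketch of weighted linear combination (WLC) over standardized criterion
# layers with a binary constraint mask. All values are hypothetical.
import numpy as np

# Three criterion layers standardized to [0, 1] on a tiny 2x2 "raster"
# (e.g. wind resource, distance to grid, distance to settlements).
criteria = np.array([
    [[0.9, 0.4], [0.7, 0.2]],
    [[0.6, 0.8], [0.5, 0.9]],
    [[0.8, 0.3], [0.9, 0.6]],
])
weights = np.array([0.5, 0.3, 0.2])       # AHP-derived, summing to 1
constraint = np.array([[1, 1], [1, 0]])   # 0 = excluded (e.g. nature reserve)

# Per-cell weighted sum, then mask out constrained cells.
suitability = np.tensordot(weights, criteria, axes=1) * constraint
print(suitability)   # higher score = more suitable; excluded cells forced to 0
```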
Abstract:
This doctoral thesis introduces an improved control principle for active du/dt output filtering in variable-speed AC drives, together with performance comparisons with previous filtering methods. The effects of power semiconductor nonlinearities on the output filtering performance are investigated; these nonlinearities include the timing deviation and the voltage pulse waveform distortion in the variable-speed AC drive output bridge. Active du/dt output filtering (ADUDT) is a method to mitigate motor overvoltages in variable-speed AC drives with long motor cables, and it is a quite recent addition to the available du/dt reduction methods. This thesis improves on the existing control method for the filter and concentrates on the low-voltage (below 1 kV AC) two-level voltage-source inverter implementation of the method. The ADUDT uses narrow voltage pulses, with durations in the order of a microsecond, from an IGBT (insulated gate bipolar transistor) inverter to control the output voltage of a tuned LC filter circuit. The filter output voltage thus has increased slope transition times at the rising and falling edges, with the possibility of no overshoot. The effect of the longer slope transition times is a reduction in the du/dt of the voltage fed to the motor cable, and lower du/dt values reduce the overvoltage effects at the motor terminals. Compared with traditional output filtering methods for this task, active du/dt filtering allows lower inductance values and a smaller physical size of the filter itself; the filter circuit weight can also be reduced. However, the power semiconductor nonlinearities skew the filter control pulse pattern, resulting in control deviation, which introduces unwanted overshoot and resonance in the filter. The control method proposed in this thesis is able to directly compensate for the dead time-induced zero-current clamping (ZCC) effect in the pulse pattern. It gives more flexibility to the pattern structure, which could help in the design of the timing deviation compensation. Previous studies have shown that when a motor load current flows in the filter circuit and the inverter, the phase leg blanking times distort the voltage pulse sequence fed to the filter input. These blanking times are caused by excessively large dead time values between the IGBT control pulses. Moreover, the various switching timing distortions present in real-world electronics operating on a microsecond timescale bring additional skew to the control. Left uncompensated, this results in distortion of the filter input voltage and a filter self-induced overvoltage in the form of an overshoot. This overshoot adds to the voltage appearing at the motor terminals, thus increasing the transient voltage amplitude at the motor. This doctoral thesis investigates the magnitude of such timing deviation effects. If the motor load current is left uncompensated in the control, the filter output voltage can overshoot up to double the input voltage amplitude; IGBT nonlinearities were observed to cause a smaller overshoot, in the order of 30%. This thesis introduces an improved ADUDT control method that is able to compensate for phase leg blanking times, giving flexibility to the pulse pattern structure and dead times. The control method is still sensitive to timing deviations, and their effect is investigated. A simple approach using a fixed delay compensation value was tried in the test setup measurements.
The ADUDT method with the new control algorithm was found to work in an actual motor drive application. Judging by the simulation results, with the delay compensation the method should ultimately enable an output voltage performance and a du/dt reduction that are free from residual overshoot effects. The proposed control algorithm is not strictly required for successful ADUDT operation: it is possible to precalculate the pulse patterns by iteration and then, for instance, store them in a look-up table inside the control electronics. Rather, the newly developed control method is a mathematical tool for solving the ADUDT control pulses. It does not contain the timing deviation compensation (from the logic-level command to the phase leg output voltage) and as such is not able to remove the timing deviation effects that cause error and overshoot in the filter. When the timing deviation compensation has to be tuned into the control pattern, the precalculated iteration method could prove simpler and equally good (or even better) compared with the mathematical solution with a separate timing compensation module. One of the key findings in this thesis is the conclusion that the correctness of the pulse pattern structure, in the sense of ZCC and predicted pulse timings, cannot be separated from the timing deviations: the usefulness of the correctly calculated pattern is reduced by voltage edge timing errors. The doctoral thesis provides an introductory background chapter on variable-speed AC drives and the problem of motor overvoltages, and takes a look at traditional solutions for overvoltage mitigation. Previous results related to active du/dt filtering are discussed; the basic operation principle and design of the filter have been studied previously, and the effect of load current in the filter and the basic idea of compensation have been presented in the past. However, there was no direct way of including the dead time in the control (except for solving the pulse pattern manually by iteration), and the magnitude of the nonlinearity effects had not been investigated. The enhanced control principle with dead time handling capability and a case study of the test setup timing deviations are the main contributions of this doctoral thesis. The simulation and experimental setup results show that the proposed control method can be used in an actual drive. Loss measurements and a comparison of active du/dt output filtering with traditional output filtering methods are also presented. Two different ADUDT filter designs are included, with ferrite core and air core inductors; the other filters included in the tests were a passive du/dt filter and a passive sine filter. The loss measurements incorporated a silicon carbide diode-equipped IGBT module, and the results show lower losses with these new device technologies. The new control principle was measured in a motor drive system with a 43 A load current and brought the filter output peak voltage from 980 V (with the previous control principle) down to 680 V in a variable-speed drive with a 540 V average DC link voltage. A 200 m motor cable was used, and the filter losses for the active du/dt methods were 111 W-126 W versus 184 W for the passive du/dt filter. In terms of inverter and filter losses, the active du/dt filtering method had a 1.82-fold increase in losses compared with an all-passive traditional du/dt output filter.
The filter mass with the active du/dt method was 17% of that of the passive du/dt filter (2.4 kg with air-core inductors versus 14 kg). Silicon carbide freewheeling diodes were found to reduce the inverter losses in active du/dt filtering by 18% compared with the same IGBT module with silicon diodes. For a 200 m cable length, the average peak voltage at the motor terminals was 1050 V with no filter, 960 V with the all-passive du/dt filter, and 700 V with active du/dt filtering applying the new control principle.
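As a toy illustration of why an LC output filter lengthens the voltage edges, the sketch below numerically integrates a series L-C circuit (with a small damping resistance) driven by a DC-link-level step; the component values are hypothetical, not the thesis's filter design. With light damping and no pulse-pattern control, the output rings and overshoots toward twice the input, which is the behavior the shaped ADUDT control pulses are designed to prevent.

```python
# Toy explicit-Euler simulation of a series L-C filter driven by a 540 V step.
# Component values are illustrative only, not the thesis's filter design.
import numpy as np

L, C, R = 2e-6, 50e-9, 1.0          # 2 uH, 50 nF, 1 ohm damping (hypothetical)
dt, steps = 1e-9, 4000              # 1 ns time step, 4 us horizon
u_in = 540.0                        # DC-link-level step from the inverter bridge

i, u_c = 0.0, 0.0
du_dt_max = 0.0
for _ in range(steps):
    di = (u_in - u_c - R * i) / L   # inductor current derivative
    du = i / C                      # capacitor (output) voltage derivative
    i += di * dt
    u_c += du * dt
    du_dt_max = max(du_dt_max, abs(du))

# The edge is stretched over ~1 us; uncontrolled, the voltage rings above u_in.
print(f"peak output du/dt ~ {du_dt_max/1e9:.2f} kV/us, "
      f"voltage at the end of the window: {u_c:.0f} V")
```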
Abstract:
In the present study, using noise-free simulated signals, we performed a comparative examination of several preprocessing techniques that are used to transform the cardiac event series into a regularly sampled time series appropriate for spectral analysis of heart rhythm variability (HRV). First, a group of noise-free simulated point event series, representing time series of heartbeats, was generated by an integral pulse frequency modulation model. In order to evaluate the performance of the preprocessing methods, the differences between the spectra of the preprocessed simulated signals and the true spectrum (the spectrum of the model input modulating signals) were surveyed by visual analysis and by contrasting merit indices. Estimated spectra should match the true spectrum as closely as possible, showing a minimum of harmonic components and other artifacts. The merit indices proposed to quantify these mismatches were the leakage rate, defined as the fraction of leakage components (located outside narrow windows centered at the frequencies of the model input modulating signals) with respect to the whole spectral components, and the numbers of leakage components with amplitudes greater than 1%, 5% and 10% of the total spectral components. Our data, obtained from a noise-free simulation, indicate that the use of heart rate values instead of heart period values in the derivation of signals representative of heart rhythm results in more accurate spectra. Furthermore, our data support the efficiency of the widely used preprocessing technique based on the convolution of inverse interval function values with a rectangular window, and suggest the preprocessing technique based on a cubic polynomial interpolation of inverse interval function values and subsequent spectral analysis as another efficient and fast method for the analysis of HRV signals.
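A sketch of the cubic-interpolation preprocessing route, assuming scipy and synthetic beat times rather than the paper's simulated series, might look as follows.

```python
# Sketch of cubic interpolation of the inverse interval function followed by
# spectral analysis. Beat times are synthetic, not the paper's simulations.
import numpy as np
from scipy.interpolate import CubicSpline
from scipy.signal import welch

# Synthetic beat times: ~1 s base period modulated at 0.25 Hz (respiratory band).
t, beats = 0.0, []
while t < 300.0:
    rr = 1.0 + 0.05 * np.sin(2 * np.pi * 0.25 * t)  # instantaneous heart period (s)
    t += rr
    beats.append(t)
beats = np.asarray(beats)

# Inverse interval function: instantaneous rate assigned to each beat time.
rates = 1.0 / np.diff(beats)
spline = CubicSpline(beats[1:], rates)

fs = 4.0                                             # resampling frequency (Hz)
tt = np.arange(beats[1], beats[-1], 1.0 / fs)
series = spline(tt)                                  # regularly sampled HRV signal

f, pxx = welch(series - series.mean(), fs=fs, nperseg=512)
print(f"dominant HRV component at ~{f[pxx.argmax()]:.2f} Hz")  # expect ~0.25 Hz
```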
Abstract:
Switching power supplies are usually implemented with control circuitry that uses a constant clock frequency to turn the power semiconductor switches on and off. A drawback of this customary operating principle is that the switching frequency and its harmonics are present in both the conducted and radiated EMI spectrum of the power converter. Various variable-frequency techniques have been introduced during the last decade to overcome this EMC problem. The main objective of this study was to compare the EMI and steady-state performance of a switch mode power supply under different spread-spectrum/variable-frequency methods. Another goal was to find suitable tools for variable-frequency EMI analysis. This thesis can be divided into three main parts: first, some aspects of spectral estimation and measurement are presented; second, selected spread spectrum generation techniques are presented with simulations and background information; finally, simulations and prototype measurements of the EMC and steady-state performance are carried out. A combination of the autocorrelation function, the Welch spectrum estimate and the spectrogram was used as a substitute for ordinary Fourier methods in the EMC analysis. It was also shown that the switching function can be used in a preliminary EMC analysis of an SMPS, and that the spectrum and autocorrelation sequence of the switching function correlate with the final EMI spectrum. This work is based on numerous simulations and measurements made with the prototype, all of them using a boost DC/DC converter. Four different variable-frequency modulation techniques in six different configurations were analyzed, and their EMI performance was compared with constant frequency operation. The output voltage and input current waveforms were also analyzed in the time domain to see the effect of spread spectrum operation on these quantities. According to the results presented in this work, spread spectrum modulation can be utilized in a power converter for EMI mitigation. The results from the steady-state voltage measurements show that variable-frequency operation of the SMPS has an effect on the voltage ripple, but the ripple measured from the prototype is still acceptable for some applications. Both current and voltage ripple can be controlled with proper main circuit and controller design.
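The switching-function idea can be illustrated with a short sketch that compares the Welch spectrum of a fixed-frequency square wave with that of a cycle-by-cycle dithered one; the frequencies and dither depth are illustrative, not the prototype's values.

```python
# Sketch of a switching-function spectral comparison: constant clock vs a
# randomly dithered clock. Frequencies and dither depth are illustrative.
import numpy as np
from scipy.signal import welch

fs, T = 10e6, 0.02                       # 10 MHz sampling, 20 ms of signal
t = np.arange(0, T, 1 / fs)
rng = np.random.default_rng(1)

def switching_function(f_center, dither):
    """Square wave whose frequency is redrawn randomly every switching cycle."""
    phase, out, f = 0.0, np.empty_like(t), f_center
    for k in range(t.size):
        phase += f / fs
        if phase >= 1.0:                 # new switching cycle: redraw frequency
            phase -= 1.0
            f = f_center * (1 + dither * rng.uniform(-1, 1))
        out[k] = 1.0 if phase < 0.5 else 0.0
    return out

for d in (0.0, 0.1):                     # constant clock vs +/-10 % spread
    f, pxx = welch(switching_function(100e3, d), fs=fs, nperseg=1 << 14)
    print(f"dither {d:.0%}: peak PSD {10 * np.log10(pxx.max()):.1f} dB")
```

Spreading the switching energy over a band lowers the peak of the power spectral density, which is the mechanism behind the EMI mitigation studied in the thesis.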
Abstract:
Appropriate supplier selection and its profound effects on increasing the competitive advantage of companies have been widely discussed in the supply chain management (SCM) literature. As environmental awareness rises among companies and industries, they attach more importance to sustainable and green activities in the selection of raw material providers. The current thesis uses the data envelopment analysis (DEA) technique to evaluate the relative efficiency of suppliers in the presence of carbon dioxide (CO2) emission for green supplier selection, incorporating the pollution of suppliers as an undesirable output into DEA. In doing so, however, two problems of conventional DEA models arise: the lack of discrimination power among decision making units (DMUs) and the flexibility of the input and output weights. To overcome these limitations, we use multiple criteria DEA (MCDEA) as one alternative. By applying MCDEA, the number of suppliers identified as efficient decreases, which leads to a better ranking and selection of the suppliers. In addition, in order to compare the performance of the suppliers with an ideal supplier, a "virtual" best practice supplier is introduced; its presence further increases the discrimination power of the model for a better ranking of the suppliers. Therefore, a new MCDEA model is proposed to simultaneously handle undesirable outputs and the virtual DMU. The developed model is applied to the green supplier selection problem, and a numerical example illustrates its applicability.
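For orientation, the sketch below implements only the plain input-oriented CCR DEA building block, with the CO2 emission handled as an input (one common way of treating an undesirable output); the MCDEA model with a virtual ideal supplier developed in the thesis is more elaborate, and the data here are hypothetical.

```python
# Sketch of input-oriented CCR DEA (multiplier form) solved as a linear
# program per supplier; CO2 is treated as an input. Hypothetical data.
import numpy as np
from scipy.optimize import linprog

# Rows = suppliers. Inputs: [cost, CO2 emission]; output: [quality score].
X = np.array([[100, 30], [120, 18], [90, 40], [110, 25]], float)  # inputs
Y = np.array([[80], [95], [70], [90]], float)                     # outputs

def ccr_efficiency(o):
    n_in, n_out = X.shape[1], Y.shape[1]
    c = np.concatenate([np.zeros(n_in), -Y[o]])       # maximize u . y_o
    A_ub = np.hstack([-X, Y])                         # u.y_j - v.x_j <= 0 for all j
    b_ub = np.zeros(len(X))
    A_eq = [np.concatenate([X[o], np.zeros(n_out)])]  # normalization: v . x_o = 1
    res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=[1.0],
                  bounds=[(1e-6, None)] * (n_in + n_out))  # epsilon lower bound
    return -res.fun

for o in range(len(X)):
    print(f"supplier {o}: efficiency = {ccr_efficiency(o):.3f}")
```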
Abstract:
Almost every problem of design, planning and management in technical and organizational systems has several conflicting goals or interests. Multicriteria decision models now represent a rapidly developing area of operations research. When solving practical optimization problems, it is necessary to take into account various kinds of uncertainty arising from the lack of data, the inadequacy of mathematical models for real processes, calculation errors, etc. In practice, this uncertainty usually leads to undesirable outcomes in which the solutions are very sensitive to any changes in the input parameters; investment management is one example. Stability analysis of multicriteria discrete optimization problems investigates how the solutions found behave in response to changes in the initial data (input parameters). This thesis is devoted to stability analysis in the problem of selecting investment project portfolios, which are optimized by considering different types of risk and the efficiency of the investment projects. The stability analysis is carried out with two approaches: qualitative and quantitative. The qualitative approach describes the behavior of solutions under small perturbations of the initial data; the stability of solutions is defined in terms of the existence of a neighborhood in the initial data space such that any perturbed problem from this neighborhood preserves the set of efficient solutions of the initial problem. The quantitative approach studies measures such as the stability radius, which gives the limits of perturbations of the input parameters that do not lead to changes in the set of efficient solutions. In the present thesis, several results were obtained, including attainable bounds for the stability radii of Pareto optimal and lexicographically optimal portfolios of the investment problem with Savage's criterion, Wald's criterion and the criterion of extreme optimism. In addition, special classes of the problem for which the stability radii are given by explicit formulae were identified. The investigations were carried out with different combinations of the Chebyshev, Manhattan and Hölder metrics, which made it possible to monitor perturbations of the input parameters in different ways.
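The stability radius can also be probed empirically on a toy instance: find the Pareto set of a small two-criteria portfolio problem, then search (here by Monte Carlo sampling within a Chebyshev ball and bisection on its radius) for the largest perturbation level that leaves the set unchanged. The data and procedure below are illustrative only, not the thesis's analytical bounds.

```python
# Empirical sketch of the stability-radius idea under the Chebyshev metric.
# Toy data: four portfolios scored on two criteria, both to be minimized.
import numpy as np

rng = np.random.default_rng(2)
risk = np.array([3.0, 5.0, 2.0, 6.0])   # criterion 1 (minimize)
loss = np.array([4.0, 2.0, 5.0, 1.0])   # criterion 2 (minimize)

def pareto_set(r, l):
    pts = list(zip(r, l))
    return frozenset(i for i, p in enumerate(pts)
                     if not any(q[0] <= p[0] and q[1] <= p[1] and q != p
                                for q in pts))

base = pareto_set(risk, loss)

def set_changes(eps, trials=2000):
    """Does some perturbation with Chebyshev norm <= eps alter the Pareto set?"""
    for _ in range(trials):
        dr = rng.uniform(-eps, eps, risk.size)
        dl = rng.uniform(-eps, eps, loss.size)
        if pareto_set(risk + dr, loss + dl) != base:
            return True
    return False

# Bisection for a Monte Carlo estimate of the stability radius.
lo, hi = 0.0, 3.0
for _ in range(20):
    mid = (lo + hi) / 2
    lo, hi = (lo, mid) if set_changes(mid) else (mid, hi)
print(f"estimated stability radius ~ {lo:.3f}")
```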
Abstract:
Over time, the demand for quantitative portfolio management has increased among financial institutions, but there is still a lack of practical tools. In 2008 the EDHEC Risk and Asset Management Research Centre conducted a survey of European investment practices. It revealed that the majority of asset or fund management companies, pension funds and institutional investors do not use more sophisticated models to compensate for the flaws of Markowitz mean-variance portfolio optimization. Furthermore, tactical asset allocation managers employ a variety of methods to estimate the return and risk of assets, but they also need sophisticated portfolio management models to outperform their benchmarks. Recent developments in portfolio management suggest that new innovations are slowly gaining ground, but they still need to be studied carefully. This thesis aims to provide a practical tactical asset allocation (TAA) application of the Black–Litterman (B–L) approach and an unbiased evaluation of the B–L model's qualities. The mean-variance framework, issues related to asset allocation decisions and return forecasting are examined carefully to uncover the issues affecting active portfolio management. European fixed income data is employed in an empirical study that investigates whether a B–L model based TAA portfolio is able to outperform its strategic benchmark. The tactical asset allocation utilizes a Vector Autoregressive (VAR) model to create return forecasts from lagged values of the asset classes as well as economic variables. The sample data (31.12.1999–31.12.2012) is divided in two: the in-sample data is used for calibrating a strategic portfolio, and the out-of-sample period is used for testing the tactical portfolio against the strategic benchmark. The results show that the B–L model based tactical asset allocation outperforms the benchmark portfolio in terms of risk-adjusted return and mean excess return. The VAR model is able to pick up changes in investor sentiment, and the B–L model adjusts the portfolio weights in a controlled manner. The TAA portfolio shows promise especially in moderately shifting the allocation toward riskier assets as the market turns bullish, without overweighting investments with high beta. Based on the findings of the thesis, the Black–Litterman model offers a good platform for active asset managers to quantify their views on investments and implement their strategies. The B–L model shows potential and offers interesting research avenues; however, the success of tactical asset allocation remains highly dependent on the quality of the input estimates.
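The core Black–Litterman posterior-return computation that such a TAA application builds on can be sketched in a few lines; the numbers are illustrative, and the coupling to VAR forecasts (which would supply the view vector) is not reproduced here.

```python
# Sketch of the Black-Litterman posterior expected returns for two asset
# classes. All numbers are illustrative; a VAR model would supply the view q.
import numpy as np

Sigma = np.array([[0.0025, 0.0010],      # covariance of two fixed income classes
                  [0.0010, 0.0040]])
w_mkt = np.array([0.6, 0.4])             # strategic benchmark weights
delta, tau = 2.5, 0.05                   # risk aversion and uncertainty scaling

pi = delta * Sigma @ w_mkt               # implied equilibrium excess returns

P = np.array([[1.0, -1.0]])              # one view: asset 1 outperforms asset 2
q = np.array([0.002])                    # ... by 20 bps (e.g. a VAR forecast)
Omega = P @ (tau * Sigma) @ P.T          # view uncertainty (a common choice)

# Posterior: blend equilibrium returns with the view, weighted by precision.
A = np.linalg.inv(tau * Sigma)
post = np.linalg.solve(A + P.T @ np.linalg.inv(Omega) @ P,
                       A @ pi + P.T @ np.linalg.inv(Omega) @ q)
print("posterior expected excess returns:", post)
```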
Abstract:
This study discusses the importance of learning through the process of exporting, and more specifically how such a process can enhance the product innovativeness of a company. The purpose of the study is to investigate the appropriate sources of learning and to suggest an interactive framework for how new knowledge from exporting markets can materialize into product innovation. The theoretical background was constructed from academic literature related to the concepts of learning by exporting, sources of learning in the market, and new product development. The empirical research, a qualitative case study, was based on four semi-structured interviews and secondary data from the case company's official site. The interview data was collected between March and April 2015 from case company employees who work directly in the exporting and product development departments. Thematic analysis was used to categorize and interpret the collected data. It was found that knowledge from an exporting market can be an incentive for product innovation, especially an incremental one. Foreign customers and competitors, as important sources of new knowledge, contribute to the innovation process. Foreign market competitors' influence on product improvements was high only when the competitor was a market leader or held a very large market share, whereas the customers' influence is always high. Therefore, involving a foreign customer in the development of a new product is vital for a company that is interested in benefiting from what is learned through exporting. The interactive framework, which is based on the theoretical background and the findings of the study, suggests that exporting companies can raise their product innovativeness by utilizing newly gained knowledge from exporting markets. Besides input in the form of sources of learning and product innovation as an output, the framework contains a process of knowledge transfer, the absorptive capacity of the firm and a new product development process. In addition, the framework and the findings enhance understanding of the disputed relationship between exporting experience and product innovation. However, future research is needed in order to fully understand all the elements of the framework, such as the absorptive capacity of the firm, and more case companies need to be studied in order to increase the generalizability of the framework.