32 results for Non-response model approach
in Doria (National Library of Finland DSpace Services) - National Library of Finland, Finland
Abstract:
Abstract (translated from Finnish): A dominant-height site index application for forests on drained peatland areas
Abstract:
Prediction of stock market valuation is of common interest to all market participants. A theoretically sound market valuation can be achieved by discounting the future earnings of equities to the present. Competing valuation models seek variables that explain equity market valuation as well as variables that could be used to predict it. In this paper we test the contemporaneous relationship between stock prices, forward-looking earnings and long-term government bond yields. We test this so-called Fed model in long- and short-term time series analyses. In order to test the dynamics of the relationship, we use the cointegration framework. The data used in this study span four decades of varying market conditions between 1964 and 2007, using data from the United States. The empirical results of our analysis do not support the Fed model. We show that long-term government bonds do not play a statistically significant role in this relationship. The effect of the forward earnings yield on stock market prices is significant, and we therefore suggest the use of standard valuation ratios when trying to predict the future paths of equity prices. Changes in long-term government bond yields also have no significant short-term impact on stock prices.
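The cointegration test described above can be sketched with a minimal Engle-Granger two-step procedure on synthetic data (an illustration only; the thesis uses actual US market data, and variable names here are invented):

```python
import numpy as np

def engle_granger_stat(y, x):
    """Two-step Engle-Granger cointegration check:
    1) OLS of y on x; 2) Dickey-Fuller t-statistic on the residuals."""
    X = np.column_stack([np.ones_like(x), x])
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    e = y - X @ beta                       # cointegrating residuals
    de, lag = np.diff(e), e[:-1]
    rho = (lag @ de) / (lag @ lag)         # DF regression: delta_e_t = rho * e_{t-1} + u_t
    u = de - rho * lag
    se = np.sqrt(u @ u / (len(u) - 1) / (lag @ lag))
    return rho / se     # compare with Engle-Granger critical values (about -3.4 at 5%)

rng = np.random.default_rng(0)
common = np.cumsum(rng.normal(size=500))           # shared stochastic trend
prices = common + rng.normal(size=500)             # stand-in for "stock prices"
earnings = 0.5 * common + rng.normal(size=500)     # stand-in for "forward earnings"
print(engle_granger_stat(prices, earnings))        # strongly negative -> cointegrated
```

A strongly negative statistic rejects the no-cointegration null; two independent random walks would typically give a value above the critical threshold.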
Abstract:
The application of forced unsteady-state reactors to the selective catalytic reduction (SCR) of nitrogen oxides (NOx) with ammonia (NH3) is motivated by the fact that favourable temperature and composition distributions, which cannot be achieved in any steady-state regime, can be obtained by means of unsteady-state operation. In normal operation, the low exothermicity of the SCR reaction (usually carried out in the range of 280-350°C) is not sufficient to sustain the chemical reaction by itself. Normal operation therefore usually requires a supply of supplementary heat, which increases the overall process operating cost. The main advantage of forced unsteady-state operation with exothermic reactions is the possibility of trapping, besides the ammonia, the moving heat wave inside the catalytic bed. Unsteady-state operation enables the exploitation of the thermal storage capacity of the catalytic bed. The catalytic bed acts as a regenerative heat exchanger, allowing auto-thermal behaviour when the adiabatic temperature rise is low. Finding the optimum reactor configuration, employing the most suitable operation model and identifying the reactor behaviour are highly important steps towards configuring a proper device for industrial applications. The Reverse Flow Reactor (RFR), a forced unsteady-state reactor, meets the above-mentioned requirements and may be employed as an efficient device for the treatment of dilute pollutant mixtures. Besides its advantages, the RFR has a main disadvantage: the 'wash-out' phenomenon, i.e. emissions of unconverted reactants at every switch of the flow direction. As a consequence, our attention was focused on finding an alternative reactor configuration to the RFR that is not affected by uncontrollable emissions of unconverted reactants. In this respect the Reactor Network (RN) was investigated.
Its configuration consists of several reactors connected in a closed sequence, simulating a moving bed by changing the reactant feeding position. In the RN the flow direction is maintained, ensuring uniform catalyst exploitation, and at the same time the 'wash-out' phenomenon is eliminated. The simulated moving bed (SMB) can operate in transient mode, giving a practically constant exit concentration and high conversion levels. The main advantage of reactor network operation is the possibility of obtaining auto-thermal behaviour with nearly uniform catalyst utilization. However, the reactor network admits only a small range of switching times for which an ignited state can be reached and maintained. Even so, a proper study of the complex behaviour of the RN may give the information necessary to overcome the difficulties that can appear in RN operation. The complexity of unsteady-state reactors arises from the fact that these reactor types are characterized by short contact times and complex interaction between heat and mass transport phenomena. Such interactions can give rise to remarkably complex dynamic behaviour characterized by spatio-temporal patterns, chaotic changes in concentration and travelling waves of heat or chemical reactivity. The main efforts of current research concern the improvement of contact modalities between reactants, the possibility of thermal wave storage inside the reactor and the improvement of the kinetic activity of the catalyst used. Attention to these aspects is important when high activity, even at low feed temperatures, and low emissions of unconverted reactants are the main operating concerns.
Also, the prediction of the reactor pseudo-steady or steady-state performance (regarding conversion, selectivity and thermal behaviour) and of the dynamic reactor response during operation are important aspects in finding the optimal control strategy for forced unsteady-state catalytic tubular reactors. The design of an adapted reactor requires knowledge of the influence of its operating conditions on the overall process performance and a precise evaluation of the operating parameter range for which a sustained dynamic behaviour is obtained. An a priori estimation of the system parameters reduces the computational effort: usually the convergence of unsteady-state reactor systems requires integration over hundreds of cycles, depending on the initial guess of the parameter values. The investigation of various operation models and thermal transfer strategies gives reliable means to obtain recuperative and regenerative devices capable of maintaining auto-thermal behaviour for low-exothermicity reactions. In the present research work a gradual analysis of the SCR of NOx with ammonia in forced unsteady-state reactors was carried out. The investigation covers the general problems related to the effect of noxious emissions on the environment, the analysis of suitable catalyst types for the process, the mathematical modelling approach for finding the system solutions, and the experimental investigation of the device found to be most suitable for the present process. In order to gain information about forced unsteady-state reactor design, operation, important system parameters and their values, mathematical description, mathematical methods for solving systems of partial differential equations and other specific aspects in a fast and easy way, a case-based reasoning (CBR) approach has been used.
This approach, which uses the experience of past similar problems and their adapted solutions, may provide a method for gaining information and solutions for new problems related to forced unsteady-state reactor technology. As a consequence, a CBR system was implemented and a corresponding tool was developed. Further on, dropping the hypothesis of isothermal operation, the feasibility of the SCR of NOx with ammonia in the RFR and in the RN with variable feeding position was investigated by means of numerical simulation. The hypothesis of non-isothermal operation was taken into account because, in our opinion, if a commercial catalyst is considered it is not possible to modify its chemical activity and adsorptive capacity to improve the operation, but it is possible to change the operating regime. In order to identify the most suitable device for the unsteady-state reduction of NOx with ammonia, from the perspective of recuperative and regenerative devices, a comparative analysis of the performance of the two devices mentioned above was carried out. The assumption of isothermal conditions at the beginning of the forced unsteady-state investigation simplified the analysis, enabling a focus on the impact of the conditions and mode of operation on the dynamic features caused by the trapping of one reactant in the reactor, without considering the impact of the thermal effect on overall reactor performance. The non-isothermal system was then investigated in order to point out the important influence of the thermal effect on overall reactor performance, studying the possibility of using the RFR and the RN as recuperative and regenerative devices and the possibility of achieving sustained auto-thermal behaviour for the low-exothermicity SCR of NOx with ammonia with low-temperature gas feeding.
Besides the influence of the thermal effect, the influence of the principal operating parameters, such as switching time, inlet flow rate and initial catalyst temperature, has been stressed. This analysis is important not only because it allows a comparison between the two devices and optimisation of the operation, but also because the switching time is the main operating parameter: an appropriate choice of this parameter enables the fulfilment of the process constraints. The level of conversion achieved, the more uniform temperature profiles, the uniformity of catalyst exploitation and the much simpler mode of operation establish the RN as the more suitable device for SCR of NOx with ammonia, both in usual operation and from the perspective of control strategy implementation. Simplified theoretical models have also been proposed to describe the performance of forced unsteady-state reactors and to estimate their internal temperature and concentration profiles. The general idea was to extend the study of catalytic reactor dynamics to perspectives that have not yet been analysed. The experimental investigation of the RN revealed good agreement between the data obtained by model simulation and those obtained experimentally.
Abstract:
The transport of macromolecules, such as low-density lipoprotein (LDL), and their accumulation in the layers of the arterial wall play a critical role in the initiation and development of atherosclerosis. Atherosclerosis is a disease of large arteries, e.g. the aorta and the coronary, carotid and other proximal arteries, that involves a distinctive accumulation of LDL and other lipid-bearing materials in the arterial wall. Over time, plaque hardens and narrows the arteries, and the flow of oxygen-rich blood to organs and other parts of the body is reduced. This can lead to serious problems, including heart attack, stroke, or even death. It has been shown that the accumulation of macromolecules in the arterial wall depends not only on the ease with which materials enter the wall, but also on the hindrance to the passage of materials out of the wall posed by underlying layers. Attention was therefore drawn to the fact that the wall structure of large arteries differs from that of other, disease-resistant vessels. Atherosclerosis tends to be localized in regions of curvature and branching in arteries, where fluid shear stress (shear rate) and other fluid mechanical characteristics deviate from their normal spatial and temporal distribution patterns in straight vessels. On the other hand, the smooth muscle cells (SMCs) residing in the media layer of the arterial wall respond to mechanical stimuli such as shear stress. Shear stress may affect SMC proliferation and migration from the media layer to the intima, as occurs in atherosclerosis and intimal hyperplasia. The study of the flow of blood and other body fluids and of heat transport through the arterial wall is one of the advanced applications of porous media in recent years. The arterial wall may be modelled at both the macroscopic scale (as a continuous porous medium) and the microscopic scale (as a heterogeneous porous medium).
In the present study, the governing equations of mass, heat and momentum transport have been solved for different species and the interstitial fluid within the arterial wall by means of computational fluid dynamics (CFD). The simulation models are based on the finite element (FE) and finite volume (FV) methods. The wall structure has been modelled by treating the wall layers as porous media with different properties. In order to study heat transport through human tissues, the simulations have been carried out for a non-homogeneous porous media model. The tissue is composed of blood vessels, cells, and an interstitium; the interstitium consists of interstitial fluid and extracellular fibers. Numerical simulations are performed in a two-dimensional (2D) model to assess the effect of the shape and configuration of the discrete phase on the convective and conductive features of heat transfer, e.g. in the interstitium of biological tissues. In addition, the governing equations of momentum and mass transport have been solved in a heterogeneous porous media model of the media layer, which has a major role in the transport and accumulation of solutes across the arterial wall. The transport of adenosine 5´-triphosphate (ATP) across the media layer is simulated as a benchmark to observe how SMCs affect species mass transport. The transport of interstitial fluid has also been simulated with the deformation of the media layer (due to high blood pressure) and of its constituents, such as SMCs, included in the model. In this context, the effect of pressure variation on the shear stress induced over SMCs by the interstitial flow is investigated in both 2D and three-dimensional (3D) geometries of the media layer. The influence of hypertension (high pressure) on the transport of low-density lipoprotein (LDL) through deformable arterial wall layers, driven by pressure-induced convective flow across the arterial wall, is also studied.
The intima and media layers are assumed to be homogeneous porous media. The results of the present study reveal that the ATP concentration over the surface of SMCs and within the bulk of the media layer depends significantly on the distribution of cells. Moreover, the shear stress magnitude and distribution over the SMC surface are affected by the transmural pressure and the deformation of the media layer of the aortic wall. This work shows that the second or even subsequent layers of SMCs may bear shear stresses of the same order of magnitude as the first layer does if the cells are arranged in an arbitrary manner. This study has brought new insights into the simulation of the arterial wall, as simplifications made in previous studies have been avoided: the SMC configurations used here, with elliptic SMC cross-sections, closely resemble the physiological arrangement of the cells, and the deformation of SMCs under high transmural pressure, which accompanies media layer compaction, has been studied for the first time. The results also demonstrate that the LDL concentration through the intima and media layers changes significantly as the wall layers are compressed by the transmural pressure. It was also noticed that the fraction of leaky junctions across the endothelial cells and the area fraction of fenestral pores over the internal elastic lamina affect the LDL distribution through the thoracic aorta wall dramatically. The simulation techniques introduced in this work may also trigger new ideas for simulating porous media in biomedical, biomechanical, chemical, and environmental engineering applications.
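The macroscopic porous-medium description of the wall layers referred to above can be summarized by a generic set of governing equations (a standard textbook-style sketch, not the thesis' exact model): interstitial flow by Darcy's law and solute transport by advection-diffusion with a possible reaction term:

```latex
\mathbf{u} = -\frac{K}{\mu}\,\nabla p , \qquad
\nabla \cdot \mathbf{u} = 0 , \qquad
\frac{\partial c}{\partial t} + \nabla \cdot (\mathbf{u}\, c)
  = \nabla \cdot \left( D\, \nabla c \right) + r ,
```

where $K$ is the Darcy permeability of the layer, $\mu$ the fluid viscosity, $p$ the pressure, $c$ the solute concentration (e.g. LDL or ATP), $D$ its effective diffusivity and $r$ a reaction or consumption term.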
Abstract:
Longitudinal surveys are increasingly used to collect event history data on person-specific processes such as transitions between labour market states. Survey-based event history data pose a number of challenges for statistical analysis, including survey errors due to sampling, non-response, attrition and measurement. This study deals with non-response, attrition and measurement errors in event history data and the bias they cause in event history analysis. The study also discusses some choices faced by a researcher using longitudinal survey data for event history analysis and demonstrates their effects. These choices include whether a design-based or a model-based approach is taken, which subset of the data to use and, if a design-based approach is taken, which weights to use. The study takes advantage of the possibility of using combined longitudinal survey-register data: the Finnish subset of the European Community Household Panel (FI ECHP) survey for waves 1–5 was linked at the person level with longitudinal register data. Unemployment spells were used as the study variables of interest. Lastly, a simulation study was conducted in order to assess the statistical properties of the Inverse Probability of Censoring Weighting (IPCW) method in a survey data context. The study shows how combined longitudinal survey-register data can be used to analyse and compare the non-response and attrition processes, test the type of missingness mechanism and estimate the size of the bias due to non-response and attrition. In our empirical analysis, initial non-response turned out to be a more important source of bias than attrition. Reported unemployment spells were subject to seam effects, omissions and, to a lesser extent, overreporting. The use of proxy interviews tended to cause spell omissions. An often-ignored phenomenon, classification error in reported spell outcomes, was also found in the data.
Neither the Missing At Random (MAR) assumption about the non-response and attrition mechanisms nor the classical assumptions about measurement errors turned out to be valid. Measurement errors in both spell durations and spell outcomes were found to cause bias in estimates from event history models; low measurement accuracy affected the estimates of the baseline hazard most. The design-based estimates based on data from respondents to all waves of interest, weighted by the last-wave weights, displayed the largest bias. Using all the available data, including the spells of attriters up to the time of attrition, helped to reduce attrition bias. Lastly, the simulation study showed that the IPCW correction to design weights reduces bias due to dependent censoring in design-based Kaplan-Meier and Cox proportional hazards model estimators. The study discusses the implications of the results for survey organisations collecting event history data, researchers using surveys for event history analysis, and researchers developing methods to correct for non-sampling biases in event history data.
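The weighted survival estimation underlying the IPCW correction can be sketched as a Kaplan-Meier estimator that accepts per-subject weights (a minimal illustration with invented toy data; in IPCW the weights would be the inverse of each subject's estimated probability of remaining uncensored):

```python
import numpy as np

def weighted_km(time, event, weights):
    """Kaplan-Meier survival curve with per-subject weights (e.g. IPCW
    weights). Returns the distinct event times and survival estimates."""
    order = np.argsort(time)
    t, d, w = time[order], event[order], weights[order]
    uniq = np.unique(t[d == 1])                # distinct event times
    surv, s = [], 1.0
    for u in uniq:
        at_risk = w[t >= u].sum()              # weighted size of the risk set
        events = w[(t == u) & (d == 1)].sum()  # weighted events at time u
        s *= 1.0 - events / at_risk
        surv.append(s)
    return uniq, np.array(surv)

# toy data: with unit weights the function reproduces the ordinary KM curve
time = np.array([2.0, 3.0, 3.0, 5.0, 8.0])
event = np.array([1, 1, 0, 1, 0])              # 1 = event observed, 0 = censored
print(weighted_km(time, event, np.ones(5)))
```

Replacing the unit weights with estimated inverse censoring probabilities gives the IPCW-corrected curve discussed above.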
Abstract:
The purpose of this thesis is to study the factors that explain bilateral fiber trade flows. This is done by analyzing bilateral trade flows during 1990-2006. It is also studied whether there are differences between fiber types. This thesis uses a gravity model approach to study the trade flows. The gravity model is mostly used to study aggregate trade data between trading countries; in this thesis the gravity model is applied to single fibers. The model is then applied to a panel data set. The regression results show clearly that there are benefits in studying different fibers separately: the effects differ considerably from each other. Furthermore, this thesis supports the existence of Linder's effect for certain fiber types.
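A minimal log-linear gravity specification of the kind described above can be fitted by OLS; the sketch below uses synthetic data with invented elasticities (the thesis itself uses panel data on single fibers, with a richer specification):

```python
import numpy as np

# Gravity model: log(trade_ij) = b0 + b1*log(GDP_i) + b2*log(GDP_j) + b3*log(dist_ij)
rng = np.random.default_rng(1)
n = 400
gdp_i = rng.uniform(1, 100, n)                   # exporter GDP (arbitrary units)
gdp_j = rng.uniform(1, 100, n)                   # importer GDP
dist = rng.uniform(100, 10000, n)                # bilateral distance
true = np.array([1.0, 0.8, 0.9, -1.1])           # invented "true" elasticities
X = np.column_stack([np.ones(n), np.log(gdp_i), np.log(gdp_j), np.log(dist)])
log_trade = X @ true + rng.normal(scale=0.1, size=n)

beta, *_ = np.linalg.lstsq(X, log_trade, rcond=None)
print(beta)   # estimates close to the elasticities used to generate the data
```

The negative distance elasticity is the characteristic gravity result; running the same regression fiber by fiber is the "single fibers" idea of the thesis.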
Abstract:
One challenge in data assimilation (DA) methods is how the error covariance of the model state is computed. Ensemble methods have been proposed for producing error covariance estimates, as the error is propagated in time using the non-linear model. Variational methods, on the other hand, use the concepts of control theory, whereby the state estimate is optimized from both the background and the measurements. Numerical optimization schemes are applied which avoid the memory storage and huge matrix inversions needed by classical Kalman filter methods. The Variational Ensemble Kalman Filter (VEnKF), a method inspired by the Variational Kalman Filter (VKF), enjoys the benefits of both ensemble methods and variational methods. It avoids the filter inbreeding problems which emerge when the ensemble spread underestimates the true error covariance; in VEnKF this is tackled by resampling the ensemble every time measurements are available. One advantage of VEnKF over VKF is that it needs neither tangent linear code nor adjoint code. In this thesis, VEnKF has been applied to a two-dimensional shallow water model simulating a dam-break experiment. The model is a public code, with water height measurements recorded at seven stations along the mid-line of the 21.2 m long, 1.4 m wide flume. Because the data were too sparse to assimilate against the 30 171-element model state vector, we chose to interpolate the data both in time and in space. The results of the assimilation were compared with those of a pure simulation. We found that the results produced by VEnKF were more realistic, without the numerical artifacts present in the pure simulation. Creating wrapper code for a model and a DA scheme can be challenging, especially when the two were designed independently or are poorly documented. In this thesis we present a non-intrusive approach to coupling the model and a DA scheme: an external program sends and receives information between the model and the DA procedure using files.
The advantage of this method is that the changes needed in the model code are minimal, only a few lines to facilitate input and output. Apart from being simple to implement, the approach can be employed even if the two codes are written in different programming languages, because the communication is not through code. The non-intrusive approach accommodates parallel computing by simply telling the control program to wait until all the processes have ended before the DA procedure is invoked. It is worth mentioning the overhead introduced by the approach, as at every assimilation cycle both the model and the DA procedure have to be initialized. Nonetheless, the method can be an ideal approach for a benchmark platform for testing DA methods. The non-intrusive VEnKF has been applied to a multi-purpose hydrodynamic model, COHERENS, to assimilate Total Suspended Matter (TSM) in Lake Säkylän Pyhäjärvi. The lake has an area of 154 km² and an average depth of 5.4 m. Turbidity and chlorophyll-a concentrations from MERIS satellite images were available for 7 days between May 16 and July 6, 2009; the effect of the organic matter was computationally eliminated to obtain TSM data. Because of the computational demands of both COHERENS and VEnKF, we chose to use a 1 km grid resolution. The results of VEnKF were compared with the measurements recorded at an automatic station located in the north-western part of the lake; however, due to the sparsity of the TSM data in both time and space, a close match could not be obtained. The use of multiple automatic stations with real-time data is important to avoid the time sparsity problem, and with DA this will help in better understanding environmental hazard variables, for instance. We found that using a very high ensemble size does not necessarily improve the results, because there is a limit beyond which additional ensemble members add very little to the performance.
The successful implementation of the non-intrusive VEnKF and the ensemble size limit on performance lead towards the emerging area of Reduced Order Modeling (ROM). To save computational resources, running the full-blown model is avoided in ROM. When ROM is applied with the non-intrusive DA approach, it may result in a cheaper algorithm that relaxes the computational challenges existing in the fields of modelling and DA.
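The file-based, non-intrusive coupling idea described above can be sketched in a few lines: the model and the DA step never call each other directly; a control loop passes the state between them through files. All names, the file format and the toy model/analysis rules here are illustrative assumptions, not the thesis' actual implementation:

```python
import json
import os
import tempfile

def model_step(state):                  # stand-in for the forecast model
    return [1.02 * x for x in state]    # e.g. simple growth dynamics

def da_step(state, observation):        # stand-in for the analysis (e.g. VEnKF)
    return [0.5 * (x + observation) for x in state]   # nudge toward the data

def run_da_cycle(state_file, observation):
    with open(state_file) as f:         # the DA side reads the model output file...
        state = json.load(f)
    analysis = da_step(state, observation)
    with open(state_file, "w") as f:    # ...and hands the analysis back via a file
        json.dump(analysis, f)

workdir = tempfile.mkdtemp()
state_file = os.path.join(workdir, "state.json")
state = [1.0, 2.0]
for obs in [1.5, 1.6]:                  # two assimilation cycles
    state = model_step(state)
    with open(state_file, "w") as f:    # model writes its forecast to a file
        json.dump(state, f)
    run_da_cycle(state_file, obs)       # DA sees only the file, never the model code
    with open(state_file) as f:
        state = json.load(f)            # model restarts from the analysis
print(state)
```

Only the two `open`/`json` lines would need to be added to a real model code, which is the "few lines which facilitate input and output" point made above.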
Abstract:
This thesis studies the effect of software architecture design properties on the design and implementation time of a mobile service application based on a client-server architecture. The study is based on a real-life project, whose qualitative analysis revealed that couplings between architectural components significantly affect the project workload. The main goal of the work was to quantitatively verify the correctness of the above observation. To achieve this goal, a set of software architecture design metrics was devised to describe the architecture of the system's subsystems, and two workload-estimating models using these metrics were created, one linear and one non-linear, where workload is the sum of a component's design, implementation and testing times. The coefficients of these models were fitted by optimizing their values with a non-linear global optimization method, the differential evolution algorithm, so that the model outputs best matched the measured workload, both with all properties (attributes) included and with only subsets of them (each property left out in turn). When the couplings between architectural components were left out of the models, the difference between the measured and estimated workloads (expressed as an error) grew in one case by 367%, meaning that the model thus formed matched the implementation times poorly on the given data. This was the largest error observed among all the left-out properties. Based on this result it was concluded that the implementation times of the system in question depend strongly on the number of couplings, and the number of couplings was therefore most likely the single most important factor affecting workload in the architectural design of the studied system.
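The fitting idea described above can be sketched with a small hand-rolled differential evolution loop that fits the coefficients of a linear effort model to measured workloads. The metric values, coefficients and DE settings below are invented for illustration; the thesis' actual metrics, models and data differ:

```python
import numpy as np

rng = np.random.default_rng(2)
metrics = rng.uniform(0, 10, size=(30, 3))     # e.g. couplings, size, interfaces
true_coef = np.array([5.0, 2.0, 0.5])          # invented "true" coefficients
effort = metrics @ true_coef + rng.normal(scale=0.5, size=30)  # measured workload

def sse(coef):
    """Fitting objective: squared error between predicted and measured effort."""
    return np.sum((metrics @ coef - effort) ** 2)

def diff_evolution(f, lo, hi, pop=40, gens=200, F=0.7, CR=0.9):
    """Minimal rand/1/bin differential evolution within box bounds [lo, hi]."""
    P = rng.uniform(lo, hi, size=(pop, len(lo)))
    cost = np.array([f(p) for p in P])
    for _ in range(gens):
        for i in range(pop):
            a, b, c = P[rng.choice(pop, 3, replace=False)]
            mutant = a + F * (b - c)                       # differential mutation
            trial = np.where(rng.random(len(lo)) < CR, mutant, P[i])  # crossover
            trial = np.clip(trial, lo, hi)
            tc = f(trial)
            if tc < cost[i]:                               # greedy selection
                P[i], cost[i] = trial, tc
    return P[np.argmin(cost)]

best = diff_evolution(sse, np.zeros(3), np.full(3, 10.0))
print(best)   # close to the coefficients used to generate the data
```

Dropping one column of `metrics` and refitting, then comparing the resulting errors, mirrors the leave-one-attribute-out analysis described above.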
Abstract:
The purpose of this study was to improve PM7's basis weight CD profile at Stora Enso's Berghuizer mill and to search for mechanical defects affecting the formation of the basis weight CD profile. In the theoretical part, PM7's structure is presented and the formation of the basis weight and caliper CD profiles is examined, as well as the disturbances affecting their formation. The function of the control system with respect to CD profiles is scrutinised, as well as the formation of the measured CD profiles. The tuning of the control system is examined through the response model and filtering: the specification of the response model and filtering is explained, along with how to determine the 2-sigma statistic. At the end of the theoretical part, the ATPA hardware and a new profile browser are introduced. In the experimental part, the focus was initially on finding and removing mechanical defects affecting the CD profiles. The next step was to verify the reliability of the online measurements, to study the stability of the basis weight CD profile and to find the so-called fingerprint, a basis weight CD profile which is unique to each paper machine. A new response model and filtering value for the basis weight CD profile were determined by bump tests. After a follow-up period, the effect of the new response model and filtering was analysed.
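The 2-sigma statistic mentioned above is, in common papermaking usage, twice the standard deviation of the cross-direction (CD) profile about its mean; a minimal sketch on an invented synthetic profile (the thesis' exact definition and data are not reproduced here):

```python
import numpy as np

# Synthetic basis weight CD profile in g/m^2: a flat 80 g/m^2 target
# with a small sinusoidal cross-direction variation of amplitude 0.4.
positions = np.linspace(0, 6 * np.pi, 200)
profile = 80.0 + 0.4 * np.sin(positions)

two_sigma = 2.0 * np.std(profile)   # the "2-sigma" CD variability number
print(round(two_sigma, 3))
```

A smaller 2-sigma value after re-tuning the response model and filtering would indicate a flatter, better-controlled profile.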
Abstract:
This publication examines the Finland 2014 - Consumption and Way of Life postal survey data collected in autumn 2014. The questionnaire was sent to a total of 3,000 Finnish-speaking residents of Finland aged 18-74. Simple random sampling was used as the sampling method. The number of cases in the data is 1,354, and the final response rate is 46%. The data collection and entry were carried out by the Economic Sociology unit of the University of Turku. The sociology units of the University of Turku and the University of Jyväskylä also contributed to the data collection costs. The publication first describes the data collection process and assesses non-response and its effect on the representativeness of the data. It then presents the new question types used in the Finland 2014 survey. The publication concludes by examining how Finns' attitudes, values and political orientation related to consumption and way of life changed between 1999 and 2014.
Abstract:
A rotating machine usually consists of a rotor and the bearings that support it. Non-idealities in these components may excite vibration of the rotating system. Uncontrolled vibrations may lead to excessive wear of the components of the rotating machine or reduce the process quality. Vibrations may be harmful even when their amplitudes are seemingly low, as is usually the case in superharmonic vibration, which takes place below the first critical speed of the rotating machine. Superharmonic vibration is excited when the rotational velocity of the machine is a fraction of the natural frequency of the system; in such a situation, part of the machine's rotational energy is transformed into vibration energy. The amount of vibration energy should be minimised in the design of rotating machines. The superharmonic vibration phenomenon can be studied by analysing the coupled rotor-bearing system with a multibody simulation approach. This research focuses on the modelling of hydrodynamic journal bearings and of rotor-bearing systems supported by journal bearings. In particular, the non-idealities affecting the rotor-bearing system and their effect on the superharmonic vibration of the rotating system are analysed. A comparison of computationally efficient journal bearing models is carried out in order to validate one model for further development. The selected bearing model is improved to take the waviness of the shaft journal into account, and the improved model is implemented and analysed in a multibody simulation code. A rotor-bearing system consisting of a flexible tube roll, two journal bearings and a supporting structure is analysed employing the multibody simulation technique. The modelled non-idealities are the shell thickness variation of the tube roll and the waviness of the shaft journal in the bearing assembly. Both modelled non-idealities may cause subharmonic resonance in the system.
In multibody simulation, the coupled effect of the non-idealities can be captured in the analysis. Additionally, one non-ideality is presented that does not excite vibrations itself but affects the response of the rotor-bearing system, namely the waviness of the bearing bushing, the non-rotating part of the bearing system. The modelled system is verified with measurements performed on a test rig. The waviness of the bearing bushing was not measured, and therefore its effect on the response could not be verified. In conclusion, the selected modelling approach is an appropriate method for analysing the response of the rotor-bearing system. When the simulated results are compared to the measured ones, the overall agreement between the results is concluded to be good.
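The geometric effect of journal waviness described above can be illustrated with the standard plain journal bearing film-thickness expression, h(θ) ≈ c(1 + ε cos θ), with a small harmonic of order n superposed for the waviness (a generic textbook-style sketch with invented parameter values, not the thesis' bearing model):

```python
import numpy as np

def film_thickness(theta, c, eps, wave_amp=0.0, wave_order=0, phase=0.0):
    """Oil-film thickness around the bearing: radial clearance c,
    eccentricity ratio eps, plus an optional surface-waviness harmonic."""
    return (c * (1.0 + eps * np.cos(theta))
            + wave_amp * np.cos(wave_order * theta + phase))

theta = np.linspace(0, 2 * np.pi, 720, endpoint=False)
ideal = film_thickness(theta, c=50e-6, eps=0.6)                      # smooth journal
wavy = film_thickness(theta, c=50e-6, eps=0.6,
                      wave_amp=2e-6, wave_order=3)                   # 3-lobe waviness
print(ideal.min(), wavy.min())   # waviness reduces the minimum film thickness
```

Because the waviness term repeats n times per revolution, it perturbs the film (and hence the bearing force) at n times the rotation frequency, which is one way such a non-ideality can feed sub- and superharmonic response.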
Abstract:
This thesis concentrates on developing a practical local-approach methodology based on micromechanical models for the analysis of ductile fracture of welded joints. Two major problems involved in the local approach, namely the dilational constitutive relation reflecting the softening behaviour of the material, and the failure criterion associated with the constitutive equation, have been studied in detail. Firstly, considerable effort was devoted to the numerical integration and computer implementation of the non-trivial dilational Gurson-Tvergaard model. Considering the weaknesses of the widely used Euler forward integration algorithms, a family of generalized mid-point algorithms is proposed for the Gurson-Tvergaard model. Correspondingly, based on the decomposition of stresses into hydrostatic and deviatoric parts, an explicit seven-parameter expression for the consistent tangent moduli of the algorithms is presented. This explicit formula avoids any matrix inversion during numerical iteration and thus greatly facilitates the computer implementation of the algorithms and increases the efficiency of the code. The accuracy of the proposed algorithms and of other conventional algorithms has been assessed in a systematic manner in order to identify the best algorithm for this study. The accurate and efficient performance of the present finite element implementation of the proposed algorithms has been demonstrated by various numerical examples. It has been found that the true mid-point algorithm (α = 0.5) is the most accurate one when the deviatoric strain increment is radial to the yield surface, and that it is very important to use the consistent tangent moduli in the Newton iteration procedure. Secondly, an assessment has been made of the consistency of current local failure criteria for ductile fracture: the critical void growth criterion, the constant critical void volume fraction criterion and Thomason's plastic limit-load failure criterion.
Significant differences in the predictions of ductility by the three criteria were found. By assuming that the void grows spherically and using the void volume fraction from the Gurson-Tvergaard model to calculate the current void-matrix geometry, Thomason's failure criterion has been modified and a new failure criterion for the Gurson-Tvergaard model is presented. Comparison with Koplik and Needleman's finite element results shows that the new failure criterion is fairly accurate. A novel feature of the new failure criterion is that a mechanism for void coalescence is incorporated into the constitutive model; hence material failure is a natural result of the development of macroscopic plastic flow and the microscopic internal necking mechanism. Under the new failure criterion, the critical void volume fraction is not a material constant: the initial void volume fraction and/or the void nucleation parameters essentially control the material failure. This feature is very desirable and makes the numerical calibration of the void nucleation parameter(s) possible and physically sound. Thirdly, a local-approach methodology based on the above two major contributions has been built into ABAQUS via the user material subroutine UMAT and applied to welded T-joints. Using void nucleation parameters calibrated from simple smooth and notched specimens, it was found that the fracture behaviour of the welded T-joints can be well predicted with the present methodology. This application has shown how the damage parameters of both the base material and the heat-affected zone (HAZ) material can be obtained in a step-by-step manner, and how useful and capable the local-approach methodology is in the analysis of fracture behaviour and crack development, as well as in the structural integrity assessment of practical problems involving non-homogeneous materials. Finally, a procedure for the possible engineering application of the present methodology is suggested and discussed.
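For reference, the void growth law shared by these criteria follows from matrix plastic incompressibility in the Gurson model, and the constant critical void volume fraction criterion is the simplest of the three being compared. A sketch under those standard relations; the f_c value below is purely illustrative, since the modified Thomason criterion described above removes the need for such a material constant:

```python
def void_growth_increment(f, d_eps_p_trace):
    """Void growth from matrix plastic incompressibility in the Gurson model:
    df_growth = (1 - f) * tr(d_eps_plastic). Nucleation terms are omitted here.
    """
    return (1.0 - f) * d_eps_p_trace

def failed_constant_fc(f, f_c=0.15):
    """Constant critical void volume fraction criterion: failure once f
    reaches a fixed f_c. The value 0.15 is an illustrative order of
    magnitude, not a calibration from the thesis; the thesis's modified
    Thomason criterion instead predicts coalescence from the evolving
    void-matrix geometry, so its effective f_c varies with loading.
    """
    return f >= f_c
```

Because growth scales with the volumetric plastic strain increment, voids grow fastest under high stress triaxiality, which is why a fixed f_c and a limit-load-based coalescence condition can disagree significantly on predicted ductility.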
Resumo:
The ability to recognize potential knowledge and convert it into business opportunities is one of the key factors of renewal in uncertain environments. This thesis examines absorptive capacity in the context of non-research-and-development innovation, with a primary focus on the social interaction that facilitates the absorption of knowledge. It proposes that everyone is, and should be, entitled to take part in the social interaction that shapes individual observations into innovations. Both innovation and absorptive capacity have traditionally been associated with research and development departments and institutions, whose innovations then need to be adopted and adapted by others. This so-called waterfall model of innovation is only one aspect of new knowledge generation: in addition to the Science–Technology–Innovation perspective, more attention has recently been paid to the Doing–Using–Interacting mode of generating new knowledge and innovations. The literature on absorptive capacity is vast, yet the concept is reified, and the greater part of the literature links absorptive capacity to research and development departments. Some publications have focused on the nature of absorptive capacity in practice and on the role of social interaction in enhancing it. Recent literature calls for studies that shed light on the relationship between individual and organisational absorptive capacity, and for studies that examine absorptive capacity in non-research-and-development environments. Drawing on the literature on employee-driven innovation and social capital, this thesis looks at how individual observations and ideas are converted into something that an organisation can use. The critical phases of absorptive capacity, during which the ideas of individuals are incorporated into a group context, are assimilation and transformation.
These two phases are seen as complementary: whereas assimilation is the application of easy-to-accept knowledge, transformation challenges the current way of thinking, and the two require distinct kinds of social interaction and practices. The results of this study can be crystallised thus: “Enhancing absorptive capacity in a practice-based, non-research-and-development context means organising the optimal circumstances for social interaction. Every individual is a potential source of signals leading to innovations. The individual, thus, recognises opportunities and acquires signals. Through the social interaction processes of assimilation and transformation, these signals are processed into the organisation’s reality and language. The conditions of creative social capital facilitate the interplay between assimilation and transformation. An organisation that strives for employee-driven innovation gains the benefits of a broader surface for opportunity recognition and faster absorption.” If organisations and managers become more aware of the benefits of enhancing absorptive capacity in practice, they have reason to assign resources to the practices that facilitate its creation. By recognising the underlying social mechanisms and structural features that lead either to assimilation or to transformation, it is easier to strike a balance between renewal and effective operations.
Resumo:
Summary