998 results for Numerical Knowledge


Relevance:

30.00%

Publisher:

Abstract:

Explanations of the marked individual differences in elementary school mathematical achievement and mathematical learning disability (MLD or dyscalculia) have involved domain-general factors (working memory, reasoning, processing speed and oral language) and numerical factors that include single-digit processing efficiency and multi-digit skills such as number system knowledge and estimation. This study of third graders (N = 258) finds that both domain-general and numerical factors contribute independently to explaining variation in three significant arithmetic skills: basic calculation fluency, written multi-digit computation, and arithmetic word problems. Estimation accuracy and number system knowledge show the strongest associations with every skill, and their contributions are independent both of each other and of the other factors. Different domain-general factors independently account for variation in each skill. Numeral comparison, a single-digit processing skill, uniquely accounts for variation in basic calculation. Subsamples of children with MLD (at or below the 10th percentile, n = 29) are compared with low achievement (LA, 11th to 25th percentiles, n = 42) and typical achievement (above the 25th percentile, n = 187). Examination of these groups, and of subsets with persistent difficulties, supports a multiple-deficits view of number difficulties: most children with number difficulties exhibit deficits in both domain-general and numerical factors. The only factor deficit common to all children with persistent MLD is in multi-digit skills. These findings indicate that many factors matter, but multi-digit skills matter most, in third grade mathematical achievement.

Relevance:

30.00%

Publisher:

Abstract:

This work aims to evaluate how effective knowledge disclosure is in attenuating negative institutional reactions caused by the uncertainties of firms' new strategies in response to novel technologies. The empirical setting is an era of technological ferment: the introduction of voice over internet protocol (VoIP) in the USA in the early 2000s, a technology that led to the convergence of the wireline telecommunications and cable television industries. The Institutional Brokers' Estimate System (also known as I/B/E/S) was used to capture the reactions of securities analysts, an important source of institutional pressure on firms' strategies. To assess knowledge disclosure, a coding technique and an established content analysis framework were used to quantitatively measure the non-numerical, unstructured data in transcripts of business events that occurred at that time. Finally, several binary response models were tested in order to assess the effect of knowledge disclosure on the probability of positive institutional reactions. The findings are that the odds of a favorable institutional reaction increase when a specific kind of knowledge is disclosed. It can be concluded that knowledge disclosure is a useful tool in situations of technological change, attenuating adverse institutional reactions to companies' strategies.
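The binary response models mentioned above are typically logit specifications, in which disclosing knowledge shifts the log-odds of a favorable analyst reaction. A minimal sketch of this idea (the variable names and coefficient values below are hypothetical illustrations, not the coefficients estimated in the study):

```python
import math

# Hypothetical logit model: the log-odds of a favorable analyst
# reaction are a linear function of a knowledge-disclosure score.
B0, B1 = -1.0, 0.8  # illustrative coefficients, not estimated values

def favorable_reaction_prob(disclosure_score):
    """Probability of a favorable institutional reaction."""
    return 1.0 / (1.0 + math.exp(-(B0 + B1 * disclosure_score)))

# In a logit model, each one-unit increase in the disclosure score
# multiplies the odds of a favorable reaction by exp(B1).
odds_multiplier = math.exp(B1)
```

In this form, the finding that "the odds of favorable reactions increase when knowledge is disclosed" corresponds to a positive estimated B1.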

Relevance:

30.00%

Publisher:

Abstract:

The need for biodiversity conservation is increasing at a rate much faster than the acquisition of biodiversity knowledge, such as descriptions of new species and the mapping of species distributions. As global changes are winning the race against the acquisition of knowledge, many researchers resort to surrogate groups to aid conservation decisions. Reductions in taxonomic and numerical resolution are also desirable, because they allow more rapid acquisition of knowledge with less effort, provided little important information is lost. In this study, we evaluated the congruence among 22 taxonomic groups sampled in a tropical forest in the Amazon basin. Our aim was to evaluate whether any of these groups could be used as surrogates for the others in monitoring programs. We also evaluated whether the taxonomic or numerical resolution of possible surrogates could be reduced without greatly reducing the overall congruence. Congruence among plant groups was high, whereas congruence among most animal groups was very low, except for anurans, whose congruence values were only slightly lower than those of plants. Lianas (Bignoniaceae) were the group with the highest congruence, even using genus presence-absence data. The congruence among groups was related to environmental factors, specifically the clay and phosphorus contents of the soil. Several groups showed strong spatial clumping, but this was unrelated to the congruence among groups. The high congruence of lianas with the other groups suggests that they may be a reasonable surrogate group, mainly for the other plant groups analyzed, if soil data are not available. Although lianas are difficult to count and identify, the number of studies on liana ecology is increasing, and most of these studies have concluded that lianas are increasing in abundance in tropical forests. In addition to their high congruence, lianas are worth monitoring in their own right, because they are sensitive to global warming and to the increasing frequency and severity of droughts in tropical regions. Our findings suggest that the use of surrogate groups at relatively low taxonomic and numerical resolution can be a reliable shortcut for biodiversity assessments, especially in megadiverse areas with high rates of habitat conversion, where the lack of biodiversity knowledge is pervasive. (c) 2012 Elsevier Ltd. All rights reserved.

Relevance:

30.00%

Publisher:

Abstract:

Understanding the complex relationships between the quantities measured by volcanic monitoring networks and shallow magma processes is a crucial step toward the comprehension of volcanic processes and a more realistic evaluation of the associated hazard. This question is very relevant at Campi Flegrei, a quiescent volcanic caldera immediately north-west of Napoli (Italy). The system's activity shows high fumarolic release and periodic slow ground movement (bradyseism) accompanied by high seismicity. This activity, together with the high population density and the presence of military and industrial facilities, makes Campi Flegrei one of the areas of highest volcanic hazard in the world. In this context, my thesis has focused on the magma dynamics due to the refilling of shallow magma chambers, and on the geophysical signals, detectable by seismic, deformation and gravimetric monitoring networks, that are associated with these phenomena. Indeed, the refilling of magma chambers is a process that frequently occurs just before a volcanic eruption; the ability to identify these dynamics through the analysis of recorded signals is therefore important for evaluating short-term volcanic hazard. The space-time evolution of the dynamics due to the injection of new magma into a magma chamber has been studied by performing numerical simulations with, and implementing additional features in, the code GALES (Longo et al., 2006), recently developed and still being upgraded at the Istituto Nazionale di Geofisica e Vulcanologia in Pisa (Italy). GALES is a finite element code based on a two-dimensional, transient physico-mathematical model able to treat fluids as multiphase homogeneous mixtures, from compressible to incompressible. The fundamental equations of mass, momentum and energy balance are discretised in both time and space using the Galerkin Least-Squares and discontinuity-capturing stabilisation techniques.
The physical properties of the mixture are computed as a function of the local magma composition, pressure and temperature. The model's features enable the study of a broad range of phenomena characterizing pre- and syn-eruptive magma dynamics, in a wide domain extending from the volcanic crater to the deep magma feeding zones. The study of the displacement field associated with the simulated fluid dynamics has been carried out with a numerical code developed by the geophysics group at University College Dublin (O'Brien and Bean, 2004b), with whom we started a very profitable collaboration. In this code, seismic wave propagation in heterogeneous media with a free surface (e.g. the Earth's surface) is simulated using a discrete elastic lattice in which particle interactions are governed by Hooke's law. This method makes it possible to account for medium heterogeneities and complex topography. The initial and boundary conditions for the simulations have been defined within a coordinated project (INGV-DPC 2004-06 V3_2 "Research on active volcanoes, precursors, scenarios, hazard and risk - Campi Flegrei"), to which this thesis contributes and in which many researchers with expertise on Campi Flegrei in the volcanological, seismic, petrological and geochemical fields collaborate. The numerical simulations of magma and rock dynamics have been coupled as described in the thesis. The first part of the thesis consists of a parametric study aimed at understanding the effect of the presence of carbon dioxide in the magma on the convection dynamics. Indeed, this volatile was abundant in many Campi Flegrei eruptions, including some commonly considered as references for future activity of this volcano. A set of simulations has been performed considering a compositionally uniform elliptical magma chamber, refilled from below by a magma with a volatile content equal to or different from that of the resident magma.
To do this, a multicomponent non-ideal magma saturation model (Papale et al., 2006), which considers the simultaneous presence of CO2 and H2O, has been implemented in GALES. Results show that the presence of CO2 in the incoming magma increases its buoyancy, promoting convection and mixing. The simulated dynamics produce pressure transients with frequencies and amplitudes within the sensitivity range of modern geophysical monitoring networks such as the one installed at Campi Flegrei. In the second part, simulations more closely related to the Campi Flegrei volcanic system have been performed. The simulated system has been defined on the basis of conditions consistent with the bulk of knowledge of Campi Flegrei, and in particular of the Agnano-Monte Spina eruption (4100 B.P.), commonly considered as the reference for a future high-intensity eruption in this area. The magmatic system has been modelled as a long dyke refilling a small shallow magma chamber; magmas of trachytic and phonolitic composition with variable H2O and CO2 contents have been considered. The simulations have been carried out varying the magma injection conditions, the system configuration (magma chamber geometry, dyke size) and the composition and volatile content of the resident and refilling magmas, in order to study the influence of these factors on the simulated dynamics. The simulation results make it possible to follow each step of the ascent of the gas-rich magma into the denser magma, highlighting the details of magma convection and mixing. In particular, a higher CO2 content in the deep magma results in more efficient and faster dynamics. From these simulations, the variation of the gravimetric field has been determined. Afterwards, the space-time distribution of stress resulting from the numerical simulations has been used as the boundary condition for simulations of the displacement field imposed by the magmatic dynamics on the surrounding rocks.
The properties of the simulated domain (rock density, P and S wave velocities) have been based on literature data from active and passive tomographic experiments, obtained through a collaboration with A. Zollo at the Dept. of Physics of the Federico II University in Napoli. The elasto-dynamic simulations make it possible to determine the variations in the space-time distribution of deformation and the seismic signal associated with the studied magmatic dynamics. In particular, the results show that these dynamics induce deformations similar to those measured at Campi Flegrei, and seismic signals whose energy is concentrated in the frequency bands typically observed in volcanic areas. The present work shows that an approach based on the solution of the equations describing the physics of processes within a magmatic fluid and the surrounding rock system is able to recognise and describe the relationships between the geophysical signals detectable at the surface and the deep magma dynamics. The results therefore suggest that the combined study of geophysical data and information from numerical simulations may, in the near future, allow a more efficient evaluation of short-term volcanic hazard.

Relevance:

30.00%

Publisher:

Abstract:

Wave breaking is an important coastal process, influencing hydro-morphodynamic processes such as turbulence generation and wave energy dissipation, run-up on the beach and overtopping of coastal defence structures. During breaking, waves are complex mixtures of air and water ("white water") whose properties affect the velocity and pressure fields in the vicinity of the free surface; depending on the breaker characteristics, different mechanisms of air entrainment are observed. Several laboratory experiments have investigated the role of air bubbles in the wave breaking process (e.g. Chanson & Cummings, 1994) and in wave loading on vertical walls (e.g. Oumeraci et al., 2001; Peregrine et al., 2006), showing that the air phase is not negligible, since the turbulent energy dissipation involves the air-water mixture. Recent advances in numerical models have given valuable insights into wave transformation and interaction with coastal structures. Among these models, some solve the RANS equations coupled with a free-surface tracking algorithm and describe the velocity, pressure, turbulence and vorticity fields (Lara et al., 2006a-b; Clementi et al., 2007). A single-phase numerical model, in which the constitutive equations are solved only for the liquid phase, neglects the effects induced by air movement and by air bubbles trapped in the water. Numerical approximations at the free surface may induce errors in predicting the breaking point and the wave height; moreover, entrapped air bubbles and water splashing in air are not properly represented. The aim of the present thesis is to develop a new two-phase model called COBRAS2 (Cornell Breaking waves And Structures, 2 phases), an enhancement of the single-phase code COBRAS0 originally developed at Cornell University (Lin & Liu, 1998).
In the first part of the work, both fluids are considered incompressible, while the second part treats the modelling of air compressibility. The mathematical formulation and the numerical solution of the governing equations of COBRAS2 are derived, and several model-experiment comparisons are shown. In particular, validation tests are performed in order to prove the model's stability and accuracy. The simulation of a large air bubble rising in an otherwise quiescent water pool reveals the model's capability to reproduce the physics of the process in a realistic way. Analytical solutions for stationary and internal waves are compared with the corresponding numerical results, in order to test processes involving a wide range of density differences. Waves induced by dam-break in different scenarios (on dry and wet beds, as well as on a ramp) are studied, focusing on the role of air as the medium in which the water wave propagates and on the numerical representation of bubble dynamics. Simulations of solitary and regular waves, characterized by both spilling and plunging breakers, are analyzed and compared with experimental data and with other numerical models, in order to investigate the influence of air on wave breaking mechanisms and to highlight the model's capability and accuracy. Finally, the modelling of air compressibility is included in the newly developed model and validated, revealing an accurate reproduction of the processes. Some preliminary tests of wave impact on vertical walls are performed: since air flow modelling allows a more realistic reproduction of breaking wave propagation, the dependence of impact pressure on breaker shape and aeration characteristics is studied, and, on the basis of a qualitative comparison with experimental observations, the numerical simulations achieve good results.

Relevance:

30.00%

Publisher:

Abstract:

As land is developed, the impervious surfaces that are created increase the amount of runoff during rainfall events, disrupting the natural hydrologic cycle and increasing both runoff volume and pollutant loadings. Pollutants deposited on, or derived from activity on, the land surface (nutrients, sediment, heavy metals, hydrocarbons, gasoline additives, pathogens, deicers, herbicides and pesticides) will likely end up in stormwater runoff in some concentration. Several of these pollutants are particulate-bound, so sediment removal can provide significant water-quality improvements, and knowledge of the ability of stormwater treatment devices to retain particulate matter is important. For this reason, three different sediment-removal units have been tested in the laboratory. First, a roadside gully pot has been tested under steady hydraulic conditions, varying the characteristics of the influent solids (diameter, particle size distribution and specific gravity). Its efficiency, in terms of particles retained, has been evaluated as a function of influent flow rate and particle characteristics, and the results have been compared with the efficiency predicted by an overflow rate model. Furthermore, the role of particle settling velocity in determining efficiency has been investigated. After the experimental runs on the gully pot, a standard full-scale model of a hydrodynamic separator (HS) has been tested under unsteady influent flow rate and constant influent solids concentration. The results presented in this study illustrate that the particle separation efficiency of the unit is predominantly influenced by the operating flow rate, which strongly affects the particle and hydraulic residence times of the system.
The efficiency data have been compared with results obtained from a modified overflow rate model; moreover, the residence time distribution has been determined experimentally through tracer analyses at several steady flow rates. Finally, three experiments have been performed on two different configurations of a full-scale model of a clarifier (linear and crenulated) under unsteady influent flow rate and constant influent solids concentration. The results illustrate that the particle separation efficiency of this unit is predominantly influenced by its configuration. Turbidity measurements have been compared with suspended sediment concentrations in order to find a correlation between the two, which would allow sediment concentration to be estimated simply by installing a turbidity probe.
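The overflow rate model used for comparison above can be sketched as follows. In its classical (Hazen) form, a particle is fully captured when its settling velocity v_s exceeds the surface overflow rate Q/A; otherwise, a fraction v_s/(Q/A) is removed. A minimal sketch assuming Stokes settling; all numerical values are illustrative, not those of the tested units:

```python
def stokes_settling_velocity(d, rho_p, rho_f=998.0, mu=1.0e-3, g=9.81):
    """Stokes settling velocity [m/s] of a small sphere of diameter d [m]
    and density rho_p [kg/m^3] in water at ~20 degC."""
    return g * (rho_p - rho_f) * d**2 / (18.0 * mu)

def overflow_rate_efficiency(v_s, Q, A):
    """Ideal removal efficiency for flow rate Q [m^3/s] and plan area
    A [m^2]: min(1, v_s / (Q/A))."""
    return min(1.0, v_s * A / Q)

# Example: 100-micron silica particle in a unit with A = 1 m^2, Q = 5 L/s
v_s = stokes_settling_velocity(100e-6, 2650.0)
eta = overflow_rate_efficiency(v_s, 5.0e-3, 1.0)
```

The modified model referred to in the text adjusts this ideal behaviour for short-circuiting and turbulence; the ideal form above is only the benchmark.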

Relevance:

30.00%

Publisher:

Abstract:

A common way to investigate turbulence is through experiments based on hot-wire measurements. The aim of this thesis is the analysis of the influence of a temperature gradient on hot-wire measurements in turbulence. To the author's knowledge, this investigation is the first attempt to document, understand and ultimately correct the effect of temperature gradients on turbulence statistics. A numerical approach is used, since instantaneous temperature and streamwise velocity fields are required to evaluate this effect. A channel flow simulation at Re_tau = 180 is analyzed to make a first evaluation of the amount of error introduced by the temperature gradient inside the domain. A hot-wire data field is obtained by processing the numerical flow field through an appropriate version of King's law, which connects voltage, velocity and temperature. A drift in the mean streamwise velocity profile and in the rms is observed when the temperature correction is performed by means of the centerline temperature. A correct mean velocity profile is achieved by correcting the temperature through its mean value at each wall-normal position, but a non-negligible error is still present in the rms. The key to properly correcting the velocity sensed by the hot wire is knowledge of the instantaneous temperature field. For this purpose, three correction methods are proposed. Finally, a numerical simulation at Re_tau = 590 is also evaluated in order to confirm the results discussed earlier.
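King's law, mentioned above, relates the anemometer bridge voltage E to the flow velocity U through E^2 = A + B * U^n; a temperature mismatch between calibration and measurement is commonly compensated by rescaling E^2 with the wire-to-fluid temperature difference. A minimal sketch of this inversion (the calibration constants and temperatures below are illustrative assumptions, not the values used in the thesis):

```python
import numpy as np

# Illustrative King's law calibration: E^2 = A + B * U**n
A, B, n = 1.2, 0.9, 0.45
T_WIRE = 250.0  # wire operating temperature [degC] (assumed)
T_REF = 20.0    # fluid temperature during calibration [degC]

def velocity_from_voltage(E, T_fluid):
    """Invert King's law after rescaling the measured voltage for the
    local fluid temperature (constant-temperature anemometry)."""
    E2_corrected = E**2 * (T_WIRE - T_REF) / (T_WIRE - T_fluid)
    return ((E2_corrected - A) / B) ** (1.0 / n)
```

Correcting with the centerline or wall-normal mean temperature instead of the instantaneous local T_fluid is precisely what introduces the drift in the statistics discussed above.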

Relevance:

30.00%

Publisher:

Abstract:

The last decade has witnessed very fast development in microfabrication technologies. The increasing industrial applications of microfluidic systems call for more intensive and systematic knowledge of this newly emerging field. Especially for gaseous flow and heat transfer at the microscale, the applicability of conventional theories developed at the macroscale is not yet completely validated, mainly because of the scarcity of experimental data for gas flows available in the literature. The objective of this thesis is to investigate these open questions by analyzing forced convection of gaseous flows through microtubes and micro heat exchangers. Experimental tests have been performed with microtubes having various inner diameters, namely 750 µm, 510 µm and 170 µm, over a wide range of Reynolds numbers covering the laminar region, the transitional zone and the onset of the turbulent regime. The results show that conventional theory is able to predict the friction factor when flow compressibility does not appear and the effect of temperature-dependent fluid properties is insignificant. A double-layered microchannel heat exchanger has been designed in order to study experimentally the efficiency of a gas-to-gas micro heat exchanger. This microdevice contains 133 parallel microchannels machined into polished PEEK plates on both the hot side and the cold side. The microchannels are 200 µm high, 200 µm wide and 39.8 mm long. The microdevice has been designed so that different materials of flexible thickness can be tested as the partition foil. Experimental tests have been carried out with five different partition foils, various mass flow rates and different flow configurations. The experimental results indicate that the thermal performance of the counter-current and cross-flow micro heat exchanger can be strongly influenced by axial conduction in the partition foil separating the hot and cold gas flows.
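The conventional-theory benchmark mentioned above is, for fully developed laminar flow in a circular tube, the Poiseuille result f = 64/Re for the Darcy friction factor, which can be compared with the value recovered from the measured pressure drop. A minimal sketch with illustrative values (incompressible flow and constant properties assumed):

```python
def friction_factor_laminar(Re):
    """Conventional (Poiseuille) Darcy friction factor, laminar regime."""
    return 64.0 / Re

def friction_factor_measured(dp, D, L, rho, u):
    """Darcy friction factor from a measured pressure drop dp [Pa] over a
    tube of diameter D [m] and length L [m], at mean velocity u [m/s]."""
    return dp * D / (L * 0.5 * rho * u**2)
```

When compressibility appears, the measured value departs from 64/Re, which is the deviation the experiments above quantify.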

Relevance:

30.00%

Publisher:

Abstract:

The quench characteristics of second generation (2G) YBCO Coated Conductor (CC) tapes are of fundamental importance for the design and safe operation of superconducting cables and magnets based on this material. Their ability to transport high current densities at high temperature, up to 77 K, and at very high fields, over 20 T, together with growing manufacturing know-how that is reducing their cost, is pushing the use of this innovative material in numerous applications, from high-field magnets for research to motors and generators, as well as cables. The aim of this Ph.D. thesis is the experimental analysis and numerical simulation of quench in superconducting HTS tapes and coils. A measurement facility for the characterization of superconducting tapes and coils was designed, assembled and tested. The facility consists of a cryostat, a cryocooler, a vacuum system, resistive and superconducting current leads, and signal feedthroughs. Moreover, the data acquisition system and the software for critical current and quench measurements were developed. A 2D model was developed using the finite element code COMSOL Multiphysics. The problem of modeling the high aspect ratio of the tape is tackled by multiplying the tape thickness by a constant factor and compensating the heat and electrical balance equations by introducing a material anisotropy. The model was then validated against the results of a 1D quench model based on a non-linear electric circuit coupled to a thermal model of the tape, against measurements from the literature, and against the critical current and quench measurements made in the cryogenic facility. Finally, the model was extended to the study of coils and windings through the definition of homogenized tape and stack properties. This procedure allows the definition of a multi-scale hierarchical model able to simulate the windings with different degrees of detail.
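The thickness-scaling workaround described above can be illustrated as follows: the tape thickness is multiplied by a factor F in the geometry, and the material properties are rescaled anisotropically so that the thermal and electrical balances per unit tape length are unchanged. The scalings below are a plausible reconstruction of this idea, not necessarily the exact compensation implemented in the thesis:

```python
def rescale_properties(F, k, rho_cp, sigma):
    """Properties for a tape whose thickness is multiplied by F, chosen
    so that resistances and capacities per unit tape length match the
    real (thin) tape.

    k      : thermal conductivity [W/(m K)]
    rho_cp : volumetric heat capacity [J/(m^3 K)]
    sigma  : electrical conductivity along the tape [S/m]
    """
    return {
        "k_along": k / F,         # conduction area grew by F
        "k_across": k * F,        # through-thickness path grew by F
        "rho_cp": rho_cp / F,     # volume per unit length grew by F
        "sigma_along": sigma / F, # electrical cross-section grew by F
    }
```

With these choices the Joule heating per unit length is also preserved, since the current density drops by F while the conducting volume grows by F.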

Relevance:

30.00%

Publisher:

Abstract:

Representing the transport and fate of an oil slick at the sea surface is a formidable task. With an accurate numerical representation of oil evolution and movement in seawater, the ability to assess and reduce oil-spill pollution risk can be greatly improved. The wind blowing over the sea surface generates ocean waves, which give rise to transport of pollutants by wave-induced velocities known as Stokes drift velocities. The Stokes drift transport associated with a random gravity wave field is a function of the wave energy spectrum that statistically describes the field and that can be provided by a numerical wave model. Therefore, in order to perform an accurate numerical simulation of oil motion in seawater, the oil-spill model must be coupled with a wave forecasting model. In this thesis, the coupling of the MEDSLIK-II oil-spill numerical model with the SWAN wind-wave numerical model has been performed and tested. In order to improve knowledge of the wind-wave model and its numerical performance, a preliminary sensitivity study of different SWAN model configurations has been carried out. The SWAN model results have been compared with the ISPRA directional buoys located at Venezia, Ancona and Monopoli, and the best model settings have been identified. Then, high-resolution currents provided by a relocatable model (SURF) have been used to force both the wave and the oil-spill models, and the coupling with the SWAN model has been tested. The trajectories of four drifters have been simulated using either JONSWAP parametric spectra or SWAN directional-frequency energy output spectra, and the results have been compared with the real paths traveled by the drifters.
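The link between the wave energy spectrum and the wave-induced transport can be made explicit: in deep water, the surface Stokes drift follows from the one-dimensional frequency spectrum E(omega) as u_s = (2/g) * ∫ omega^3 E(omega) d omega. A minimal sketch of this evaluation (the flat spectrum used here is an arbitrary illustration, not a SWAN or JONSWAP output):

```python
import numpy as np

G = 9.81  # gravitational acceleration [m/s^2]

def surface_stokes_drift(omega, E):
    """Surface Stokes drift [m/s] from a 1-D frequency spectrum E(omega)
    [m^2 s], using the deep-water relation
    u_s = (2/g) * integral(omega^3 * E) d omega (trapezoidal rule)."""
    y = omega**3 * E
    return 2.0 / G * np.sum(0.5 * (y[1:] + y[:-1]) * np.diff(omega))

# Illustrative flat spectrum between 1 and 2 rad/s
omega = np.linspace(1.0, 2.0, 1001)
E = np.full_like(omega, 0.01)
u_s = surface_stokes_drift(omega, E)
```

For a monochromatic wave of amplitude a and frequency omega, the same relation reduces to the textbook result u_s = a^2 * omega^3 / g.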

Relevance:

30.00%

Publisher:

Abstract:

Turbulent energy dissipation is presented in the theoretical context of the famous Kolmogorov theory, formulated in 1941. Some remarks and comments on this theory help the reader understand the approach to the study of turbulence and provide some basic insight into the problem. A clear distinction is made among dissipation, pseudo-dissipation and dissipation surrogates. Dissipation governs how the turbulent kinetic energy in a flow is transformed into internal energy, which makes it a fundamental quantity to investigate in order to improve our understanding of turbulence. The dissertation focuses on the experimental investigation of the pseudo-dissipation. This quantity is difficult to measure, as it requires knowledge of all the derivatives of the three-dimensional velocity field. When a hot-wire technique is used to measure dissipation, one must deal with surrogates of dissipation, since not all the terms can be measured. The analysis of surrogates is the main topic of this work. In particular, two flows are considered: the turbulent channel and the turbulent jet. These canonical flows, introduced briefly, are often used as benchmarks for CFD solvers and experimental equipment owing to their simple structure, and observations made in them often transfer to more complicated and interesting cases with many industrial applications. The main tools of investigation are DNS simulations and experimental measurements. The DNS data are used as a benchmark for the experimental results, since all the components of the dissipation are known within a numerical simulation. The results of some DNS were already available at the start of this thesis, so the main work consisted in reading and processing the data. The experiments were carried out by means of hot-wire anemometry, described in detail at both a theoretical and a practical level.
The study of the DNS data of a turbulent channel at Re = 298 reveals that the traditional surrogate can be improved. Consequently, two new surrogates are proposed and analysed, based on terms of the velocity gradient that are easy to measure experimentally. We manage to find a formulation that improves the accuracy of the surrogates by an order of magnitude. For the jet flow, results from a DNS of a temporal jet at Re = 1600 and results from our experimental facility CAT at Re = 70000 are compared in order to validate the experiment. It is found that the ratio between components of the dissipation differs between the DNS and the experimental data. Possible errors in both data sets are discussed, and some ways to improve the data are proposed.
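The simplest of the surrogates discussed above is the classical isotropic estimate, which replaces the full pseudo-dissipation (nu times the mean square of all nine velocity gradients) with a single measurable term, eps ≈ 15 * nu * <(du/dx)^2>, with du/dx obtained from the hot-wire time series through Taylor's frozen-turbulence hypothesis. A minimal sketch (a synthetic signal stands in for real anemometer data):

```python
import numpy as np

def isotropic_surrogate(u, dt, U_mean, nu):
    """Classical isotropic dissipation surrogate
    eps = 15 * nu * <(du/dx)^2>, with the streamwise derivative
    obtained from a single-point time series via Taylor's hypothesis:
    du/dx = -(1/U_mean) * du/dt."""
    dudt = np.gradient(u, dt)
    dudx = -dudt / U_mean
    return 15.0 * nu * np.mean(dudx**2)

# Synthetic stand-in signal: one full period of a sinusoid
t = np.arange(0.0, 1.0, 1e-4)
u = np.sin(2.0 * np.pi * t)
eps = isotropic_surrogate(u, 1e-4, 1.0, 1.0)
```

The improved surrogates mentioned above add further measurable gradient terms to this single-component estimate, relaxing the isotropy assumption.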

Relevance:

30.00%

Publisher:

Abstract:

The self-regeneration capacity of articular cartilage is limited, due to its avascular and aneural nature. Loaded explants and cell cultures have demonstrated that chondrocyte metabolism can be regulated via physiologic loading. However, the explicit ranges of mechanical stimuli that correspond to a favourable metabolic response, associated with extracellular matrix (ECM) synthesis, remain elusive, and unsystematic protocols lacking this knowledge produce inconsistent results. This study aims to determine the intrinsic ranges of physical stimuli that increase ECM synthesis and simultaneously inhibit nitric oxide (NO) production in chondrocyte-agarose constructs, by numerically re-evaluating the experiments performed by Tsuang et al. (2008). Twelve loading patterns were simulated with poro-elastic finite element models in ABAQUS. Pressure on the solid matrix, von Mises stress, maximum principal stress and pore pressure were selected as intrinsic mechanical stimuli. Their development rates and their magnitudes at the steady state of cyclic loading were calculated with MATLAB at the construct level. A concurrent increase in glycosaminoglycan and collagen was observed at a pressure of 2300 Pa and a pressure rate of 40 Pa/s. In the ranges 0-1500 Pa and 0-40 Pa/s, NO production was consistently positive with respect to controls, whereas ECM synthesis was negative in the same ranges. A linear correlation was found between pressure rate and NO production (R = 0.77). The stress states identified in this study are generic and could be used to develop predictive algorithms for matrix production in agarose-chondrocyte constructs of arbitrary shape, size and agarose concentration. They could also help increase the efficacy of loading protocols for avascular tissue engineering. Copyright (c) 2010 John Wiley & Sons, Ltd.

Relevance:

30.00%

Publisher:

Abstract:

The twenty-first century has seen a further dramatic increase in the use of quantitative knowledge for governing social life, following its explosion in the 1980s. Indicators and rankings play an increasing role in the way governmental and non-governmental organizations distribute attention, make decisions, and allocate scarce resources. Quantitative knowledge promises to be more objective and straightforward, as well as more transparent and open to public debate, than qualitative knowledge, and thus to produce more democratic decision-making. However, we know little about the social processes through which this knowledge is constituted, or about its effects. Understanding how such numeric knowledge is produced and used is increasingly important, as proliferating technologies of quantification alter modes of knowing in subtle and often unrecognized ways. This book explores the implications of the global multiplication of indicators as a specific technology of numeric knowledge production used in governance. Its combination of insights from the anthropology of law, the history of science, science and technology studies, the sociology of quantification, economics and geography will appeal to those who are uncomfortable with the separation between 'theoretical' and 'empirical' approaches, and with the current weakness of critiques addressing the main trends shaping the relations between capitalism, markets, law and democracy. Its theoretical discussion of the nature and historical formation of quantification will appeal to those who ask questions such as, 'What is new or different about our contemporary reliance on quantitative knowledge?' Groundbreaking empirical case studies uncover the social work and politics that often go into the making of indicators, and explore the far-reaching effects and impacts of these numerical representations in specific settings.

Relevance:

30.00%

Publisher:

Abstract:

One of the key factors behind the growth in global trade in recent decades is an increase in intermediate inputs as a result of the development of vertical production networks (Feenstra, 1998). It is widely recognized that the formation of production networks is due to the expansion of multinational enterprises' (MNEs) activities. MNEs have been differentiated into two types according to their production structure: horizontal and vertical foreign direct investment (FDI). In this paper, we extend the model presented by Zhang and Markusen (1999) to include both horizontal and vertical FDI in a model with traded intermediates, using numerical general equilibrium analysis. The simulation results show that horizontal MNEs are more likely to exist when countries are similar in size and in relative factor endowments, whereas vertical MNEs are more likely to exist when countries differ in relative factor endowments and trade costs are positive. The simulation results indicate that lower trade costs for final goods and differences in factor intensity are the conditions for attracting vertical MNEs.