888 results for Fixed source
Abstract:
MCNP remains one of the main Monte Carlo radiation transport codes. Its use, like that of any other Monte Carlo based code, has grown as computers have become faster and more affordable over time. However, using the Monte Carlo method to tally events in volumes that represent a small fraction of the whole system may become unfeasible if a purely analogue transport procedure (no variance reduction techniques) is employed and precise results are demanded. The calculation of reaction rates in activation foils placed in critical systems is one such case. The present work takes advantage of the fixed-source representation in MCNP to perform this task with more effective sampling: the neutron population in the vicinity of the tallying region is characterized and then used in a geometrically reduced, coupled simulation. An extended analysis of source-dependent parameters is carried out in order to understand their influence on simulation performance and on the validity of the results. Although discrepant results have been observed for small enveloping regions, the procedure proves very efficient, giving adequate and precise results in shorter times than the standard analogue procedure. (C) 2007 Elsevier Ltd. All rights reserved.
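A standard way to quantify the kind of efficiency gain claimed above is the Monte Carlo figure of merit, FOM = 1/(R²T), where R is the relative error of the tally and T the computing time: a scheme with a higher FOM reaches a given precision in less time. The short Python sketch below compares a hypothetical analogue run with a reduced-geometry fixed-source run on this basis; all numbers are illustrative assumptions and are not taken from the paper.

```python
# Illustrative sketch (not from the paper): comparing an analogue run with a
# two-stage fixed-source run via the usual Monte Carlo figure of merit,
# FOM = 1 / (R^2 * T), where R is the relative error of the tally and T the
# computing time. A higher FOM means a more efficient calculation.

def figure_of_merit(relative_error: float, minutes: float) -> float:
    """Return the Monte Carlo figure of merit for a tally."""
    return 1.0 / (relative_error**2 * minutes)

# Hypothetical numbers, chosen only to illustrate the comparison.
analogue = figure_of_merit(relative_error=0.10, minutes=600.0)   # full-system analogue run
coupled  = figure_of_merit(relative_error=0.02, minutes=120.0)   # reduced-geometry fixed-source run

print(f"analogue FOM = {analogue:.3f}")
print(f"coupled  FOM = {coupled:.3f}")
print(f"efficiency gain ~ {coupled / analogue:.0f}x")
```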
Abstract:
This thesis presents the 2.5D modelling of synthetic data for the Multi-Frequency Electromagnetic Method (EMMF). The work is presented in two parts: the first details the methods used to compute the fields generated by a horizontal current loop placed on the surface of two-dimensional models; the second uses those results to simulate the data measured in the EMMF method, namely the real and imaginary parts of the radial component of the magnetic field generated by the loop. In this second part, we examine the behaviour of the computed field for several models, including variations in their physical properties and geometry, in order to assess the sensitivity of the observed field to the structures present in a sedimentary basin. With this modelling we can observe the characteristics of the data and how the two parts, real and imaginary, contribute distinct and complementary information. The results show that the radial magnetic field data provide very good lateral resolution, even with the source fixed at a single position. The ability of these data to distinguish and resolve target structures will be fundamental for future inversion work, as well as for the construction of apparent resistivity sections.
Abstract:
This paper analyzes the impact of transceiver impairments on the outage probability (OP) and throughput of decode-and-forward two-way cognitive relay (TWCR) networks, where the relay is self-powered by harvesting energy from the transmitted signals. We consider two bidirectional relaying protocols, namely the multiple access broadcast (MABC) protocol and the time division broadcast (TDBC) protocol, as well as two power transfer policies, namely dual-source (DS) energy transfer and single-fixed-source (SFS) energy transfer. Closed-form expressions for the OP and throughput of the network are derived in the context of delay-limited transmission. Numerical results corroborate our analysis, allowing us to quantify the degradation of the OP and throughput of TWCR networks due to transceiver hardware impairments. For the specific parameters considered, our results indicate that the MABC protocol asymptotically achieves a throughput 0.65 bits/s/Hz higher than the TDBC protocol, while the DS energy transfer scheme offers better performance than the SFS policy for both relaying protocols.
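In the delay-limited regime considered above, throughput is commonly obtained from the outage probability as τ = (1 − P_out)·R·α, where R is the fixed transmission rate and α the fraction of the block actually used for data (the remainder being spent on energy harvesting and additional relaying phases). The Python sketch below only illustrates this relation; the rate, outage value and time splits are assumptions for the example, not figures from the paper.

```python
# Minimal sketch of delay-limited throughput: tau = (1 - P_out) * R * alpha,
# where R is the fixed transmission rate [bits/s/Hz], P_out the outage
# probability and alpha the fraction of the block used for data transmission.
# All numbers below are illustrative assumptions, not results from the paper.

def delay_limited_throughput(rate_bps_hz: float, outage_prob: float, data_fraction: float) -> float:
    return (1.0 - outage_prob) * rate_bps_hz * data_fraction

# Example: compare two hypothetical protocols with different time splits.
tau_two_phase   = delay_limited_throughput(rate_bps_hz=2.0, outage_prob=0.05, data_fraction=0.5)
tau_three_phase = delay_limited_throughput(rate_bps_hz=2.0, outage_prob=0.05, data_fraction=1.0 / 3.0)

print(f"two-phase (MABC-like) throughput  : {tau_two_phase:.3f} bits/s/Hz")
print(f"three-phase (TDBC-like) throughput: {tau_three_phase:.3f} bits/s/Hz")
```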
Abstract:
The diffusion of mobile telephony began in Finland in 1971, when the first car phones, called ARP, were taken into use. The technology evolved from ARP to NMT and later to GSM, but its main application remained voice transfer. The birth of the Internet created an open public data network and easy access to other types of computer-based services over networks. Telephones had been used as modems, but the development of cellular technologies enabled automatic access to the Internet from mobile phones. Other wireless technologies, for instance wireless LANs, were also introduced. Telephony in fixed networks had developed from analog to digital, which allowed easy integration of fixed and mobile networks. This development opened completely new functionality to computers and mobile phones, and it initiated the merger of the information technology (IT) and telecommunication (TC) industries. Despite the opportunity this created for new competition between firms, applications based on the new functionality were rare. Furthermore, technology development combined with innovation can be disruptive to industries. This research focuses on the new technology's impact on competition in the ICT industry through an understanding of the strategic needs and alternative futures of the industry's customers. The speed of change in the ICT industry is high, and it was therefore valuable to integrate the dynamic capability view of the firm into this research. Dynamic capabilities are an application of the resource-based view (RBV) of the firm. As stated in the literature, strategic positioning complements the RBV. This theoretical framework leads the research to focus on three areas: customer strategic innovation and business model development, external future analysis, and process development combining the two. The theoretical contribution of the research lies in the development of a methodology integrating the RBV, dynamic capabilities and strategic positioning. The research approach has been constructive, owing to the actual managerial problems that initiated the study, and the requirement for iterative and innovative progress in the research supported this choice. The study applies known methods in product development, for instance the innovation process in the Group Decision Support Systems (GDSS) laboratory and Quality Function Deployment (QFD), and combines them with known strategy analysis tools such as industry analysis and the scenario method. As its main result, the thesis presents the strategic innovation process, in which new business concepts are used to describe alternative resource configurations and scenarios as alternative competitive environments; this can be a new way for firms to achieve competitive advantage in high-velocity markets. In addition to the strategic innovation process, the study has also produced approximately 250 new innovations for the participating firms, reduced technology uncertainty, supported strategic infrastructural decisions in the firms, and built a knowledge bank including data from 43 ICT and 19 paper industry firms between the years 1999 and 2004. The methods presented in this research are also applicable to other industries.
Abstract:
In Lamium album, sucrose and raffinose-family oligosaccharides are the major products of photosynthesis that are stored in leaves. Using gas analysis and 14CO2 feeding, we compared photosynthesis and the partitioning of recently fixed carbon in plants where sink activity was lowered by excision of flowers and chilling of roots with those where sink activity was not modified. Reduction in sink activity led to a reduction in the maximum rate of photosynthesis, to retention of fixed carbon in source leaves and to the progressive accumulation of raffinose-family oligosaccharides. This ultimately affected the extractable activities of invertase and sucrose phosphate synthase. At the end of the light period, invertase activity was significantly higher in treated plants. By contrast, sucrose phosphate synthase activity was significantly lower in treated plants. We propose that reducing sink activity in L. album is associated with a shift in metabolism away from starch and sucrose synthesis and towards sucrose catabolism, galactinol utilisation and the synthesis of raffinose-family oligosaccharides.
Abstract:
The Human Leukocyte Antigen (HLA) has been described in many cases as a prognostic factor for cancer. The main characteristic of the HLA genes, located on chromosome 6 (6p21.3), is their extensive polymorphism. Nucleotide sequence analyses show that the variation is restricted predominantly to the exons encoding the peptide-binding domains of the protein. The HLA polymorphism therefore defines the repertoire of peptides that bind to the HLA allotypes, and this in turn defines an individual's ability to respond to exposure to many infectious agents during their lifetime. HLA typing has become an important clinical analysis. Formalin-fixed, paraffin-embedded (FFPE) tissue samples are routinely collected in oncology. This material could serve as a good source of DNA, given that in past studies DNA collection was not normally carried out for most tissues or samples in regular clinical procedures. Taking into account that the most important problem with DNA from FFPE samples is fragmentation, we proposed a new method for typing the HLA-A allele from FFPE samples based on the sequences of exons 2, 3 and 4. We designed a set of 12 primers: four for HLA-A exon 2, three for HLA-A exon 3 and five for HLA-A exon 4, each according to the flanking sequences of its respective exon and the sequence variation among different alleles. Seventeen FFPE samples collected at the Karolinska University Hospital in Stockholm, Sweden, were subjected to PCR and the products were sequenced. Finally, all the sequences obtained were analysed and compared against the IMGT-HLA database. The FFPE samples had previously been HLA typed, and those results were compared with the ones obtained with this method. According to our results, the samples could be correctly sequenced. With this procedure, we can conclude that our study is the first sequence-based typing method that allows the analysis of old DNA samples for which no other source is available. This study also opens the possibility of developing analyses to establish new relationships between HLA and different diseases such as cancer.
Abstract:
Four perfluorocarbon tracer dispersion experiments were carried out in central London, United Kingdom, in 2004. These experiments were supplementary to the Dispersion of Air Pollution and Penetration into the Local Environment (DAPPLE) campaign and consisted of ground-level releases, roof-level releases and mobile releases; the latter are believed to be the first such experiments to be undertaken. A detailed description of the experiments, including release, sampling, analysis and wind observations, is given. The characteristics of dispersion from the fixed and mobile sources are discussed and contrasted, in particular the decay in concentration levels away from the source location and the additional variability that results from the non-uniformity of vehicle speed. Copyright © 2009 Royal Meteorological Society
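For readers wanting to reproduce the qualitative decay of concentration with distance from a fixed ground-level release, the sketch below evaluates the standard Gaussian plume expression for a continuous ground-level point source, C(x) = Q / (π u σy(x) σz(x)). The power-law dispersion coefficients and the release rate are illustrative assumptions only and are not the parameterisation used in the DAPPLE work.

```python
import numpy as np

# Ground-level centreline concentration from a continuous ground-level point
# source, with full ground reflection: C = Q / (pi * u * sigma_y * sigma_z).
# The sigma power laws and all numbers are illustrative assumptions, not the
# DAPPLE parameterisation.

Q = 1.0e-3      # release rate [kg/s]
u = 3.0         # mean wind speed [m/s]

def sigma_y(x):           # lateral spread [m], illustrative power law
    return 0.16 * x**0.9

def sigma_z(x):           # vertical spread [m], illustrative power law
    return 0.12 * x**0.9

x = np.array([50.0, 100.0, 200.0, 400.0, 800.0])      # downwind distance [m]
conc = Q / (np.pi * u * sigma_y(x) * sigma_z(x))      # [kg/m^3]

for xi, ci in zip(x, conc):
    print(f"x = {xi:5.0f} m   C = {ci:.2e} kg/m^3")
```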
Abstract:
The pupunha (Guilielma speciosa) is the fruit of a palm tree typical of the northern region of Brazil, whose stem is used as a source of heart of palm. The fruit, which is about 65% pulp, is a source of oil and carotenes. In the present work, an analysis of the kinetics of the supercritical extraction of oil from pupunha pulp is presented. Carbon dioxide was used as the solvent. The extractions were carried out at 25 MPa and 323 K and at 30 MPa and 318 K. The chemical composition of the extracts in terms of fatty acids was determined by gas chromatography. The amount of oleic acid, an unsaturated fatty acid, in the CO2 extracts was larger than that in the extract obtained with hexane. The overall extraction curves were modeled using the single-parameter model proposed in the literature to describe the desorption of toluene from activated carbon.
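The single-parameter desorption model referred to above is not reproduced in the abstract; as a stand-in, the sketch below fits a generic one-parameter first-order form, m(t) = m_inf·(1 − e^(−kt)), to a hypothetical overall extraction curve with SciPy. Both the functional form and the data are assumptions for illustration, not the model or the measurements from the paper.

```python
import numpy as np
from scipy.optimize import curve_fit

# Generic one-parameter extraction-curve fit (illustrative stand-in, not the
# specific desorption model used in the paper): m(t) = m_inf * (1 - exp(-k*t)),
# where m_inf is the total extractable oil (taken as known) and k is fitted.

m_inf = 12.0                                            # assumed total extractable oil [g/100 g pulp]
t = np.array([0.0, 30.0, 60.0, 120.0, 240.0, 360.0])    # extraction time [min], hypothetical
m = np.array([0.0, 3.1, 5.6, 8.9, 11.2, 11.8])          # cumulative yield [g/100 g], hypothetical

def model(t, k):
    return m_inf * (1.0 - np.exp(-k * t))

(k_fit,), _ = curve_fit(model, t, m, p0=[0.01])
print(f"fitted rate constant k = {k_fit:.4f} 1/min")
```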
Abstract:
An excitation force that is not influenced by the system's states is said to be an ideal energy source. In real situations, a direct and feedback coupling between the excitation source and the system always exists. This manifestation of the law of conservation of energy is known as the Sommerfeld effect. When a mathematical model is built for such a system, additional equations are usually necessary to describe the vibration source and its coupling with the mechanical system. In this work, a cantilever beam with a non-ideal DC electric motor fixed to its free end is analyzed. The motor carries an unbalanced mass that excites the system in proportion to the current applied to the motor. During the motor's coast-up operation, as the excitation frequency approaches the beam's first natural frequency, the motor speed remains nearly constant even as the drive power is increased further, until, upon exceeding a critical input power, it suddenly jumps to a much higher value while the vibration amplitude simultaneously jumps to a much lower value. The Sommerfeld effect was found to depend on several system parameters and on the motor's operating procedure, and these parameters are explored in order to avoid resonance capture. Numerical simulations and experimental tests are used to provide insight into this dynamic behavior.
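A common minimal model of such a non-ideal system couples a single-degree-of-freedom oscillator, representing the beam's first mode, to an unbalanced rotor driven by a linear DC-motor torque characteristic Γ(φ̇) = a − bφ̇. The sketch below integrates one such textbook-style model with SciPy; the equations follow a standard Lagrangian derivation, and every parameter value is an illustrative assumption rather than a value from the experiment described in the paper.

```python
import numpy as np
from scipy.integrate import solve_ivp

# Minimal non-ideal (Sommerfeld-type) model: a 1-DOF oscillator for the beam's
# first mode coupled to an unbalanced rotor with a linear DC motor torque
# curve Torque(phidot) = a - b*phidot. All parameter values are illustrative.

mt, c, k = 2.0, 8.0, 2.0e4        # modal mass [kg], damping [N s/m], stiffness [N/m]
m0, r    = 0.05, 0.02             # unbalanced mass [kg] and its eccentricity [m]
Jt       = 1.0e-4 + m0 * r**2     # rotor inertia incl. unbalance [kg m^2]
a, b     = 0.08, 5.0e-4           # motor torque parameters [N m], [N m s]

def rhs(t, y):
    x, xdot, phi, phidot = y
    # Coupled equations written as M(phi) * [xddot, phiddot] = f and solved.
    M = np.array([[mt,                   m0 * r * np.cos(phi)],
                  [m0 * r * np.cos(phi), Jt                  ]])
    f = np.array([-c * xdot - k * x + m0 * r * phidot**2 * np.sin(phi),
                  a - b * phidot])
    xddot, phiddot = np.linalg.solve(M, f)
    return [xdot, xddot, phidot, phiddot]

sol = solve_ivp(rhs, (0.0, 10.0), [0.0, 0.0, 0.0, 0.0], max_step=1e-3)

omega_n = np.sqrt(k / mt)
print(f"beam natural frequency : {omega_n:.1f} rad/s")
print(f"final rotor speed      : {sol.y[3, -1]:.1f} rad/s")
print(f"peak tip displacement  : {np.abs(sol.y[0]).max() * 1e3:.2f} mm")
```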
Abstract:
Many bird species are attracted to landfills that take domestic or putrescible waste. These sites provide a reliable, rich source of food which can attract large concentrations of birds. The birds may cause conflicts with human interests with respect to noise, litter carried off site, possible transmission of pathogens in bird droppings and the potential for birdstrikes. In the UK there is an 8-mile safeguarding radius around an airfield, within which any planning application must pass scrutiny from regulatory bodies to show that it will not attract birds into the area and increase the birdstrike risk. Peckfield Landfill site near Leeds, West Yorkshire, was chosen for a trial of a netting system designed to exclude birds from domestic waste landfills. The site was assessed for bird numbers before the trial, during the netting trial and after the net had been removed. A ScanCord net was installed for 6 weeks, during which time all household waste was tipped inside the net. Gull numbers on the site decreased from a mean of 1074 per hourly count to 29 per hourly count after two days, and increased again after the net had been removed. Bird concentrations in the surroundings were also monitored to assess the effect of the net. Bird numbers in the immediate vicinity of the landfill site were higher than those further away, and when the net was installed the bird concentrations adjacent to the landfill site decreased. Corvids were not affected by the net, as they fed on covered waste which remained available outside the net throughout the trial. This shows that bird problems on a landfill site are complex, requiring a comprehensive policy of bird control. A supporting bird-scaring system and a clear operating policy would be required for sites near airports.
Abstract:
This study reports the performance of a combined anaerobic-aerobic packed-bed reactor that can be used to treat domestic sewage. Initially, a bench-scale reactor was operated in three experimental phases. In the first phase, the anaerobic reactor was operated with an average organic matter removal efficiency of 77% for a hydraulic retention time (HRT) of 10 h. In the second phase, the reactor was operated with an anaerobic stage followed by an aerobic zone, resulting in a mean value of 91% efficiency. In the third and final phase, the anaerobic-aerobic reactor was operated with recirculation of the effluent of the reactor through the anaerobic zone. The system yielded mean total nitrogen removal percentages of 65 and 75% for recycle ratios (r) of 0.5 and 1.5, respectively, and the chemical oxygen demand (COD) removal efficiencies were higher than 90%. When the pilot-scale reactor was operated with an HRT of 12 h and r values of 1.5 and 3.0, its performance was similar to that observed in the bench-scale unit (92% COD removal for r = 3.0). However, the nitrogen removal was lower (55% N removal for r = 3.0) due to problems with the hydrodynamics in the aerobic zone. The anaerobic-aerobic fixed-bed reactor with recirculation of the liquid phase allows for concomitant carbon and nitrogen removal without adding an exogenous source of electron donors and without requiring any additional alkalinity supplementation.
Abstract:
An investigation was undertaken to determine the chemical characterization of inhalable particulate matter in the Houston area, with special emphasis on source identification and apportionment of outdoor and indoor atmospheric aerosols using multivariate statistical analyses. Fine (< 2.5 µm) particle aerosol samples were collected by means of dichotomous samplers at two fixed-site ambient monitoring stations (Clear Lake and Sunnyside) and one mobile monitoring van in the Houston area during June-October 1981 as part of the Houston Asthma Study. The mobile van allowed particulate sampling to take place both inside and outside of twelve homes. The samples, collected over 12-h periods on a 7 AM-7 PM and 7 PM-7 AM (CDT) schedule, were analyzed for mass, trace elements, and two anions. Mass was determined gravimetrically. An energy-dispersive X-ray fluorescence (XRF) spectrometer was used to determine elemental composition, and ion chromatography (IC) was used to determine sulfate and nitrate. Average chemical compositions of the fine aerosol at each site are presented. Sulfate was found to be the largest single component of the fine-fraction mass, comprising approximately 30% of the fine mass outdoors and 12% indoors. Principal components analysis (PCA) was applied to identify sources of aerosols and to assess the role of meteorological factors in the variation of the particulate samples. The results suggested that meteorological parameters were not associated with the sources of the aerosol samples collected at these Houston sites. Source factor contributions to fine mass were calculated using a combination of PCA and stepwise multivariate regression analysis. Much of the total fine mass was apparently contributed by sulfate-related aerosols, whose average contributions were 56% of the outdoor ambient fine particulate matter and 26% of the indoor fine particulate matter. The characterization of indoor aerosol in residential environments was compared with the results for outdoor aerosols. It is suggested that much of the indoor aerosol may be due to outdoor sources, but that there may be important contributions from common indoor sources in the home environment such as smoking and gas cooking.
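The PCA-plus-regression apportionment described above can be prototyped in a few lines: components are extracted from the standardized elemental concentrations and fine mass is then regressed on the component scores, the regression coefficients indicating each factor's association with mass. The sketch below uses scikit-learn on synthetic data; the species list, the number of components and the data-generating process are assumptions for illustration, and the stepwise selection step used in the study is omitted.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.preprocessing import StandardScaler
from sklearn.linear_model import LinearRegression

# Illustrative PCA + regression mass apportionment on synthetic data
# (not the Houston data set): extract components from standardized elemental
# concentrations, then regress fine mass on the component scores.

rng = np.random.default_rng(0)
n_samples = 200
elements = ["S", "Pb", "Fe", "Ca", "Zn", "K"]            # assumed species list

# Two synthetic "sources" mixed into the element concentrations.
source_strength = rng.lognormal(mean=0.0, sigma=0.5, size=(n_samples, 2))
profiles = np.array([[0.9, 0.1, 0.1, 0.0, 0.1, 0.2],     # sulfate-like factor
                     [0.1, 0.6, 0.5, 0.4, 0.3, 0.1]])    # crustal/traffic-like factor
X = source_strength @ profiles + 0.05 * rng.normal(size=(n_samples, len(elements)))
fine_mass = source_strength @ np.array([20.0, 8.0]) + rng.normal(scale=1.0, size=n_samples)

scores = PCA(n_components=2).fit_transform(StandardScaler().fit_transform(X))
reg = LinearRegression().fit(scores, fine_mass)

print("R^2 of mass reconstruction:", round(reg.score(scores, fine_mass), 3))
print("regression coefficients   :", np.round(reg.coef_, 2))
```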
Abstract:
Using fixed-point arithmetic is one of the most common design choices for systems where area, power or throughput are heavily constrained. In order to produce implementations where cost is minimized without negatively impacting the accuracy of the results, a careful assignment of word-lengths is required. Finding the optimal combination of fixed-point word-lengths for a given system is a combinatorial NP-hard problem to which developers devote between 25 and 50% of the design-cycle time. Reconfigurable hardware platforms such as FPGAs also benefit from the advantages of fixed-point arithmetic, as it compensates for the slower clock frequencies and less efficient area utilization of these platforms with respect to ASICs. As FPGAs become commonly used for scientific computation, designs grow larger and more complex, up to the point where they can no longer be handled efficiently by current signal and quantization noise modelling and word-length optimization methodologies. In this Ph.D. thesis we explore different aspects of the quantization problem and present new methodologies for each of them. Techniques based on interval extensions have made it possible to obtain accurate models of signal and quantization noise propagation in systems with non-linear operations. We take this approach a step further by introducing elements of Multi-Element Generalized Polynomial Chaos (ME-gPC) and combining them with a state-of-the-art statistical Modified Affine Arithmetic (MAA) based methodology in order to model systems that contain control-flow structures. Our methodology produces the different execution paths automatically, determines the regions of the input domain that exercise them, and extracts the system's statistical moments from the partial results. We use this technique to estimate both the dynamic range and the round-off noise in systems with such control-flow structures, and we show the accuracy of our approach, which in some case studies with non-linear operators deviates by only 0.04% from the simulation-based reference values. A known drawback of interval-extension techniques is the combinatorial explosion of terms as the size of the targeted system grows, which leads to scalability problems. To address this issue we present a clustered noise injection technique that groups the signals in the system, introduces the noise terms of each group independently and then combines the results at the end. In this way the number of noise sources present at any given time is kept under control and the combinatorial explosion is minimized. We also present a multi-way partitioning algorithm aimed at minimizing the deviation of the results caused by the loss of correlation between noise terms, in order to keep the results as accurate as possible. This thesis also covers the development of word-length optimization methodologies based on Monte Carlo simulations that run in reasonable times. We present two novel techniques that reduce execution time from different angles. First, the interpolative method applies a simple but precise interpolator to estimate the sensitivity of each signal, which is later used to guide the optimization effort. Second, the incremental method builds on the fact that, although a given confidence level must be guaranteed for the final results of the optimization, more relaxed levels, and therefore considerably fewer samples per simulation, can be used in the initial stages of the search, when we are still far from the optimized solution. Through these two approaches we demonstrate that the execution time of classical greedy search algorithms can be reduced by factors of up to 240x for small and medium-sized problems. Finally, this thesis introduces HOPLITE, an automated, flexible and modular quantization framework that implements the previous techniques and is publicly available. Its aim is to offer developers and researchers a common ground for easily prototyping and verifying new methodologies for system modelling and word-length optimization. We describe its workflow, justify the design decisions taken, explain its public API and give a step-by-step demonstration of its execution. We also show, through a simple example, how new extensions can be connected to the existing interfaces in order to expand and improve the capabilities of HOPLITE.
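As a toy illustration of the kind of Monte Carlo quantization analysis the thesis accelerates, the sketch below quantizes a small fixed-point datapath at several candidate fractional word-lengths and estimates the resulting round-off noise power by simulation. The datapath, the coefficients and the sample count are assumptions for illustration and are unrelated to the HOPLITE tool itself.

```python
import numpy as np

# Toy Monte Carlo estimate of round-off noise for candidate fractional
# word-lengths in a small fixed-point datapath y = a*x1 + b*x2.
# Everything here is illustrative; it is not the methodology or tooling of the thesis.

rng = np.random.default_rng(42)

def quantize(v, frac_bits):
    """Round v to a grid with `frac_bits` fractional bits."""
    step = 2.0 ** (-frac_bits)
    return np.round(v / step) * step

def datapath(x1, x2, frac_bits=None):
    a, b = 0.7071, 0.4142                      # assumed fixed coefficients
    if frac_bits is None:                      # double-precision reference
        return a * x1 + b * x2
    qa, qb = quantize(a, frac_bits), quantize(b, frac_bits)
    return quantize(quantize(qa * x1, frac_bits) + quantize(qb * x2, frac_bits), frac_bits)

n = 100_000
x1, x2 = rng.uniform(-1, 1, n), rng.uniform(-1, 1, n)

for frac_bits in (8, 12, 16):
    err = datapath(x1, x2, frac_bits) - datapath(x1, x2)
    print(f"{frac_bits:2d} fractional bits: noise power = {np.mean(err**2):.3e}")
```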
Abstract:
The kinetics of naphthalene-2-sulfonic acid (2-NSA) adsorption by granular activated carbon (GAC) were measured and the relationships between adsorption, desorption, bioavailability and biodegradation assessed. The conventional Langmuir model fitted the experimental sorption isotherm data, and the introduced 2-NSA-degrading bacteria established on the surface of the GAC did not interfere with adsorption. The potential value of GAC as a microbial support in the aerobic degradation of 2-NSA by Arthrobacter globiformis and Comamonas testosteroni was investigated. Using both virgin and microbially colonised GAC, adsorption removed 2-NSA from the liquid phase up to its saturation capacity of 140 mg/g GAC within 48 h. However, between 83.2% and 93.3% of the adsorbed 2-NSA was bioavailable to both bacterial species as a source of carbon for growth. In comparison with the non-inoculated GAC, the combination of rapid adsorption and biodegradation increased the amount of 2-NSA removed from the influent phase (by 70–93%) as well as the bed-life of the GAC (from 40 to >120 d). A microbially conditioned GAC fixed-bed reactor containing 15 g GAC removed 100% of the 2-NSA (100 mg/l) from tannery wastewater at an empty bed contact time of 22 min for a minimum of 120 d without the need for GAC reconditioning or replacement. This suggests that small-volume GAC bioreactors could be used for tannery wastewater recycling.
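The Langmuir model referred to above has the form q = q_max·K·C / (1 + K·C), with q the adsorbed amount at equilibrium and C the liquid-phase concentration. The snippet below fits that expression with SciPy to a hypothetical isotherm; the data points are assumptions chosen only to demonstrate the fitting step, not the measurements reported in the study (which found a saturation capacity of about 140 mg/g).

```python
import numpy as np
from scipy.optimize import curve_fit

# Langmuir isotherm fit: q = q_max * K * C / (1 + K * C).
# The equilibrium data below are hypothetical, used only to show the fitting step.

C = np.array([5.0, 10.0, 25.0, 50.0, 100.0, 200.0])        # liquid-phase 2-NSA [mg/l]
q = np.array([35.0, 60.0, 95.0, 115.0, 130.0, 138.0])       # adsorbed amount [mg/g GAC]

def langmuir(C, q_max, K):
    return q_max * K * C / (1.0 + K * C)

(q_max, K), _ = curve_fit(langmuir, C, q, p0=[140.0, 0.05])
print(f"q_max = {q_max:.1f} mg/g,  K = {K:.3f} l/mg")
```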
Abstract:
This paper examines the source-country determinants of FDI into Japan. The paper highlights certain methodological and theoretical weaknesses in the previous literature and offers some explanations for hitherto ambiguous results. Specifically, the paper highlights the importance of panel data analysis and of identifying fixed effects in the analysis rather than simply pooling the data; indeed, we argue that many of the results reported elsewhere are an artefact of this mis-specification. To this end, pooled, fixed effects and random effects estimates are compared. The results suggest that FDI into Japan is inversely related to trade flows, such that trade and FDI are substitutes. Moreover, the results suggest that FDI increases with home-country political and economic stability. The paper also shows that previously reported results regarding the importance of exchange rates, relative borrowing costs and labour costs in explaining FDI flows are sensitive to the econometric specification and estimation approach. The paper also discusses the importance of these results within a policy context. In recent years Japan has sought to attract FDI, though many firms still complain of barriers to inward investment penetration in Japan. The results show that cultural and geographic distance are only of marginal importance in explaining FDI, and that the results are consistent with the market-seeking explanation of FDI. As such, the attitude to risk in the source country is strongly related to the size of FDI flows to Japan. © 2007 The Authors. Journal compilation © 2007 Blackwell Publishing Ltd.
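The pooled-versus-fixed-effects comparison at the heart of this kind of study is straightforward to reproduce with the linearmodels package, as sketched below on synthetic data. The variable names (fdi, trade, stability) and the data-generating process are assumptions for illustration, not the paper's data set or specification.

```python
import numpy as np
import pandas as pd
from linearmodels.panel import PooledOLS, PanelOLS, RandomEffects

# Illustrative comparison of pooled OLS, fixed-effects and random-effects
# estimators on a synthetic source-country panel (not the paper's data).

rng = np.random.default_rng(1)
countries, years = 30, 15
idx = pd.MultiIndex.from_product(
    [range(countries), range(years)], names=["country", "year"])

country_effect = np.repeat(rng.normal(scale=2.0, size=countries), years)
trade = rng.normal(size=countries * years) + 0.5 * country_effect   # correlated with the effect
stability = rng.normal(size=countries * years)
fdi = 1.0 - 0.8 * trade + 0.6 * stability + country_effect + rng.normal(size=countries * years)

df = pd.DataFrame({"fdi": fdi, "trade": trade, "stability": stability}, index=idx)
exog = pd.concat([pd.Series(1.0, index=idx, name="const"), df[["trade", "stability"]]], axis=1)

pooled = PooledOLS(df["fdi"], exog).fit()
fe = PanelOLS(df["fdi"], exog, entity_effects=True).fit()
re = RandomEffects(df["fdi"], exog).fit()

# Pooled OLS is biased when the country effect is correlated with trade;
# the fixed-effects estimate should sit much closer to the true value of -0.8.
print("trade coefficient (pooled, FE, RE):",
      [round(m.params["trade"], 3) for m in (pooled, fe, re)])
```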