947 results for Random coefficient logit (RCL) model


Abstract:

In the present global era, in which firms choose the location of their plants beyond national borders, location characteristics are important for attracting multinational enterprises (MNEs). Better access to countries with large markets is clearly attractive for MNEs. For example, special tariff treatments such as the Generalized System of Preferences (GSP) are beneficial for MNEs whose home country does not enjoy such treatments. Not only such country characteristics but also region characteristics (i.e., province-level or city-level ones) matter, particularly when location characteristics differ widely between a nation's regions. The existence of industrial concentration, that is, agglomeration, is a typical regional characteristic. It is with consideration of these country-level and region-level characteristics that MNEs decide their location abroad. A large number of academic studies have investigated in what kinds of countries MNEs locate, i.e., location choice analysis. Employing the usual new economic geography model (i.e., constant elasticity of substitution (CES) utility function, Dixit-Stiglitz monopolistic competition, and iceberg trade costs), the literature derives the profit function, whose coefficients are estimated using maximum likelihood procedures. Recent studies are as follows: Head, Ries, and Swenson (1999) for Japanese MNEs in the US; Belderbos and Carree (2002) for Japanese MNEs in China; Head and Mayer (2004) for Japanese MNEs in Europe; Disdier and Mayer (2004) for French MNEs in Europe; Castellani and Zanfei (2004) for large MNEs worldwide; Mayer, Mejean, and Nefussi (2007) for French MNEs worldwide; Crozet, Mayer, and Mucchielli (2004) for MNEs in France; and Basile, Castellani, and Zanfei (2008) for MNEs in Europe. At present, three main topics can be found in this literature. The first introduces various location elements as independent variables. The above-mentioned new economic geography model usually yields a profit function that depends on market size, productive factor prices, the price of intermediate goods, and trade costs. As a proxy for the price of intermediate goods, a measure of agglomeration is often used, particularly the number of manufacturing firms. Some studies employ more disaggregated numbers of manufacturing firms, such as the number of manufacturing firms with the same nationality as the firms choosing the location (e.g., Head et al., 1999; Crozet et al., 2004) or the number of firms belonging to the same firm group (e.g., Belderbos and Carree, 2002). As part of trade costs, some investment climate measures have been examined: free trade zones in the US (Head et al., 1999), special economic zones and open coastal cities in China (Belderbos and Carree, 2002), and Objective 1 structural funds and cohesion funds in Europe (Basile et al., 2008). Second, the validity of proxy variables for location elements is further examined. Head and Mayer (2004) examine the validity of market potential in location choice. They propose the use of two measures: the Harris market potential index (Harris, 1954) and the Krugman-type index used in Redding and Venables (2004). The Harris-type index is simply the sum of distance-weighted real GDP. They employ the Krugman-type market potential index, which is directly derived from the new economic geography model, as it takes into account the extent of competition (i.e., the
price index) and is constructed using the estimated coefficients of importing-country dummy variables in the well-known gravity equation, as in Redding and Venables (2004). They find that "theory does not pay", in the sense that the Harris market potential outperforms Krugman's market potential in both the magnitude of its coefficient and the fit of the estimated model. The third topic explores substitution patterns among locations by examining inclusive values in the nested-logit model. For example, using firm-level data on French investments both in France and abroad over the 1992-2002 period, Mayer et al. (2007) investigate the determinants of location choice and assess empirically whether the domestic economy has been losing attractiveness over the recent period. The estimated coefficient for the inclusive value is strongly significant and near unity, indicating that the national economy does not differ from the rest of the world in terms of substitution patterns. Similarly, Disdier and Mayer (2004) investigate whether French MNEs consider Western and Eastern Europe as two distinct groups of potential host countries by examining the coefficient for the inclusive value in nested-logit estimation. They confirm the relevance of an East-West structure in the country location decision and furthermore show that this relevance decreases over time. The purpose of this paper is to investigate the location choice of Japanese MNEs in Thailand, Cambodia, Laos, Myanmar, and Vietnam; it is closely related to the third topic mentioned above. By examining region-level location choice with the nested-logit model, I investigate the relative importance of not only country characteristics but also region characteristics. Such an investigation is particularly valuable in the case of location choice in these five countries: industrialization remains immature in those countries that have not yet succeeded in attracting enough MNEs, and as a result there are not yet expected to be crucial regional variations for MNEs within such a nation, meaning that country characteristics are still relatively important for attracting MNEs. To illustrate, in the case of Cambodia and Laos, one of the crucial elements for Japanese MNEs would be that LDC preferential tariff schemes are available for exports from Cambodia and Laos. On the other hand, in the case of Thailand and Vietnam, which have accepted a relatively large number of MNEs and have thus seen regional inequality rise, regional characteristics such as the existence of agglomeration would become important elements in location choice. Our sample countries therefore seem to offer rich variation for analyzing the relative importance of country characteristics versus region characteristics. Our empirical strategy has a further advantage. As in the third topic in the location choice literature, the use of the nested-logit model enables us to examine substitution patterns between country-based and region-based location decisions by MNEs in the countries concerned. For example, it is possible to investigate empirically whether Japanese multinational firms consider Thailand/Vietnam and the other three countries as two distinct groups of potential host countries, by examining the inclusive value parameters in nested-logit estimation. In particular, our sample countries all experienced dramatic changes in, for example, economic growth or trade cost reduction during the sample period. We can thus trace the dramatic dynamics of such substitution patterns.
Our rigorous analysis of the relative importance of country characteristics versus region characteristics is valuable from the viewpoint of policy implications. First, while the former characteristics should be improved mainly by the central government in each country, there is sometimes room for improving the latter characteristics even by local governments or smaller institutions such as private agencies. Consequently, it becomes important for these smaller institutions to know just how crucial the improvement of region characteristics is for attracting foreign companies. Second, as economies grow, country characteristics become similar across countries. For example, the LDC preferential tariff schemes are available only while a country is classified as less developed. Therefore, it is important, particularly for the least developed countries, to know what kinds of regional characteristics become important following economic growth; in other words, after their country characteristics become similar to those of the more developed countries. I also incorporate one important characteristic of MNEs, namely productivity. The well-known Helpman-Melitz-Yeaple model indicates that only firms with higher productivity can afford overseas entry (Helpman et al., 2004). Beyond this argument, there may be differences in MNEs' productivity among our sample countries and regions. Such differences are important from the viewpoint of "spillover effects" from MNEs, which are one of the most important benefits for host countries of accepting their entry. Spillover effects arise when the presence of inward foreign direct investment (FDI) raises domestic firms' productivity through various channels such as imitation. Such positive effects might be larger in areas with more productive MNEs. Therefore, it becomes important for host countries to know how productive the firms likely to invest in them are. The rest of this paper is organized as follows. Section 2 takes a brief look at the worldwide distribution of Japanese overseas affiliates. Section 3 provides an empirical model to examine their location choice, and lastly, we discuss future work on estimating our model.
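Since the empirical strategy rests on the inclusive values of a nested-logit model, a minimal sketch of that mechanism may help; the country/region utilities and the dissimilarity parameter below are invented for illustration and are not the paper's specification or estimates.

```python
import numpy as np

# Hypothetical nested-logit sketch: regions (lower nests) grouped by country (upper nests).
# V[c] holds assumed deterministic profits of the regions in country c.
V = {
    "Thailand": np.array([2.1, 1.8, 1.5]),
    "Vietnam":  np.array([1.9, 1.6]),
    "Cambodia": np.array([1.0]),
}
lam = 0.7  # inclusive-value (dissimilarity) parameter; lam near 1 => nests barely matter

# Inclusive value of each country: IV_c = log(sum_r exp(V_cr / lam))
IV = {c: np.log(np.sum(np.exp(v / lam))) for c, v in V.items()}

# Upper-level (country) choice probabilities: P(c) proportional to exp(lam * IV_c)
num = {c: np.exp(lam * iv) for c, iv in IV.items()}
den = sum(num.values())
P_country = {c: n / den for c, n in num.items()}

# Lower-level (region | country) probabilities
for c, v in V.items():
    P_region_given_c = np.exp(v / lam) / np.sum(np.exp(v / lam))
    print(c, "P(country)=%.3f" % P_country[c],
          "P(region|country)=", np.round(P_region_given_c, 3))
```

When the estimated inclusive-value coefficient is close to one, the nested structure collapses to a standard conditional logit, which is exactly the substitution-pattern test discussed above.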

Abstract:

This paper presents a framework for an SCGE model that is compatible with the Armington assumption and explicitly considers transport activities. In the model, the trade coefficient takes the form of a potential function, and the equilibrium market price becomes similar to the price index of varietal goods in the context of new economic geography (NEG). The features of the model are investigated using a minimal setting comprising two non-transport sectors and three regions. Because transport costs are given exogenously to facilitate the study of their impacts, commodity prices are also determined relative to them. The model can be described as a system of homogeneous equations, in which the output of one region can be fixed arbitrarily, much as a price is in a Walrasian equilibrium. The model closure is sensitive to the consistency of the formulation, so that the homogeneity of the system would be lost if an alternative form of the trade coefficients were used.
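As a rough, assumption-laden illustration (not the paper's actual SCGE system), the following sketch computes potential-function trade coefficients from exogenous transport costs for three regions and the resulting NEG-style price index; the prices, costs, and elasticity are invented.

```python
import numpy as np

# Toy illustration: 3 regions, exogenous transport costs c[i, j] from producer i to market j.
c = np.array([[0.0, 1.0, 2.0],
              [1.0, 0.0, 1.5],
              [2.0, 1.5, 0.0]])
p = np.array([1.0, 1.1, 0.9])   # assumed producer (mill) prices
sigma = 4.0                      # assumed elasticity of substitution

# Potential-function (logit/CES-like) trade coefficients: share of market j supplied by region i
delivered = (p[:, None] * np.exp(c)) ** (1.0 - sigma)   # delivered-price attractiveness
t = delivered / delivered.sum(axis=0)                   # columns sum to one

# The market price in region j then resembles a CES price index over varietal goods
P = delivered.sum(axis=0) ** (1.0 / (1.0 - sigma))
print(np.round(t, 3), np.round(P, 3))
```

Because the shares in each column sum to one, scaling all delivered prices by a common factor leaves the trade coefficients unchanged while scaling the price index, which hints at the homogeneity property discussed above.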

Abstract:

The authors, all from UPM and of different ages, have been involved at different times in academic and practical work on this subject. Building on the precedent of probabilistic safety models for concrete by E. Torroja and A. Páez in Madrid, Spain, around 1957 (a line of work now represented at ICOSSAR conferences), author J.M. Antón, involved since autumn 1967 in European steel construction within CECM, produced a mathematical model for the reduction of superposed independent loads and, using it, a load-coefficient pattern for codes presented in Rome in February 1969, which was in practice adopted for European construction; a suggestion of unification for concrete-steel-al. followed at JCSS Lisbon in February 1974. That model represents each load with a Gumbel Type I distribution over a 50-year reference period, reduces it to one year so that it can be added to other independent loads, and sets the sum, within Gumbel theory, back to a 50-year return period; parallel models exist. A complete reliability system was produced, including nonlinear effects such as buckling, phenomena considered to some extent in the current structural Eurocodes derived from the Model Codes. The system was also considered by the author within CEB in the presence of hydraulic effects from rivers, floods, and the sea, with reference to actual practice. When drafting a road drainage standard for MOPU, Spain, the authors developed an optimization model that gives a way to determine the return period, from 10 to 50 years, for the hydraulic flows to be considered in road drainage. Satisfactory examples were a stream in south-east Spain modelled with a Gumbel Type I distribution and a paper by Ven Te Chow on the Mississippi at Keokuk using a Gumbel Type II distribution; the model can be modernized with a wider range of extreme-value laws. In fact, in the MOPU drainage standard the drafting commission also acted as an expert panel to set a table of return periods for road drainage elements, in effect a complex multi-criteria decision system. These earlier ideas were used, for example, in widely applied codes and were reported at symposia and meetings, but not published in English-language journals; a condensed account of the authors' contributions is presented here. The authors are also involved in optimization for hydraulic and agricultural planning, and give modest hints of intended applications to agricultural and environmental planning through the selection of the criteria and utility functions involved in Bayesian, multi-criteria, or mixed decision systems. Modest consideration is given to climate change, to production and commercial systems, and to other aspects such as social and financial ones.
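As a small, self-contained illustration of the Gumbel Type I machinery referred to above (synthetic data and invented parameters, not the authors' models or the MOPU norm values), the sketch below fits a Gumbel distribution to annual maximum flows and evaluates design flows for several return periods.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
# Synthetic annual maximum flows (m^3/s); in practice these come from gauging records.
annual_max = stats.gumbel_r.rvs(loc=120.0, scale=35.0, size=40, random_state=rng)

# Fit a Gumbel Type I distribution to the annual maxima.
loc, scale = stats.gumbel_r.fit(annual_max)

# Return level for a T-year return period: the flow exceeded with probability 1/T in any year.
def return_level(T):
    return stats.gumbel_r.ppf(1.0 - 1.0 / T, loc=loc, scale=scale)

for T in (10, 25, 50):
    print(f"T = {T:3d} years -> design flow ~ {return_level(T):7.1f} m^3/s")
```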

Abstract:

Corrosion of reinforcing steel in concrete due to chloride ingress is one of the main causes of the deterioration of reinforced concrete structures. The structures most affected by such corrosion are buildings in marine zones and structures exposed to de-icing salts, such as highways and bridges. The process is accompanied by an increase in the volume of the corrosion products at the rebar-concrete interface. Depending on the level of oxidation, iron can expand to as much as six times its original volume. This increase in volume exerts tensile stresses in the surrounding concrete which result in cracking and spalling of the concrete cover if the concrete tensile strength is exceeded. The mechanism by which steel embedded in concrete corrodes in the presence of chloride is the local breakdown of the passive layer formed under the highly alkaline conditions of the concrete. It is assumed that corrosion initiates when a critical chloride content reaches the rebar surface. The mathematical formulation idealizes the corrosion sequence as a two-stage process: an initiation stage, during which chloride ions penetrate to the reinforcing steel surface and depassivate it, and a propagation stage, in which active corrosion takes place until cracking of the concrete cover has occurred. The aim of this research is to develop computer tools to evaluate the duration of the service life of reinforced concrete structures, considering both the initiation and propagation periods. Such tools must offer a friendly interface to facilitate their use by researchers even if their background is not in numerical simulation. For the evaluation of the initiation period, different tools have been developed. Program TavProbabilidade: provides the means to carry out a probability analysis of a chloride ingress model. Such a tool is necessary because of the lack of data and the general uncertainties associated with the phenomenon of chloride diffusion. It differs from the deterministic approach in that it computes not just a chloride profile at a certain age, but a range of chloride profiles, each with its probability of occurrence. Program TavProbabilidade_Fiabilidade: carries out reliability analyses of the initiation period. It takes into account the critical value of the chloride concentration at the steel that causes breakdown of the passive layer and the beginning of the propagation stage. It differs from the deterministic analysis in that it does not predict whether corrosion is going to begin or not, but quantifies the probability of corrosion initiation. Program TavDif_1D: was created to perform a one-dimensional deterministic analysis of the chloride diffusion process by the finite element method (FEM), which numerically solves Fick's second law. Despite the different FEM solvers already developed in one dimension, the decision to create a new code (TavDif_1D) was taken because of the need for a solver with a friendly interface for pre- and post-processing according to the needs of IETCC. An innovative tool was also developed with a systematic method devised to compare the ability of different 1D models to predict the actual evolution of chloride ingress based on experimental measurements, and also to quantify the degree of agreement of the models with each other. For the evaluation of the entire service life of the structure, a computer program has been developed using the finite element method to couple both service life periods: initiation and propagation.
The 2D program (TavDif_2D) allows the complementary use of two external programs within a single friendly interface:
• GMSH – a finite element mesh generator and post-processing viewer
• OOFEM – a finite element solver
TavDif_2D is responsible for deciding, at each time step, when and where to start applying the boundary conditions of the fracture mechanics module as a function of the chloride concentration and the corrosion parameters (Icorr, etc.). It is also responsible for verifying the presence and degree of fracture in each element, in order to pass on the variation of the diffusion coefficient with crack width. The advantages of the FEM with the interface provided by the tool are:
• the flexibility to input data such as material properties and boundary conditions as time-dependent functions;
• the flexibility to predict the chloride concentration profile for different geometries;
• the possibility of coupling chloride diffusion (initiation stage) with chemical and mechanical behavior (propagation stage).
The OOFEM code had to be modified to accept temperature, humidity, and time-dependent values for the material properties, which is necessary to adequately describe the environmental variations. A 3D simulation has been performed to simulate the behavior of a beam under both the external load and the internal load caused by the corrosion products, using embedded-fracture elements, in order to plot the deflection of the central region of the beam versus the external load for comparison with the experimental data.
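To illustrate the kind of probabilistic initiation-period analysis described above (not the actual TavProbabilidade/TavProbabilidade_Fiabilidade code; all distributions and parameter values are invented), the sketch below combines the error-function solution of Fick's second law with Monte Carlo sampling to estimate a probability of corrosion initiation.

```python
import numpy as np
from scipy.special import erf

rng = np.random.default_rng(1)
n = 100_000                      # Monte Carlo samples

# Hypothetical input distributions (illustrative values only):
Cs    = rng.lognormal(mean=np.log(0.6), sigma=0.2, size=n)    # surface chloride (% wt. cement)
D     = rng.lognormal(mean=np.log(1e-12), sigma=0.3, size=n)  # diffusion coefficient (m^2/s)
Ccr   = rng.normal(0.4, 0.05, size=n)                         # critical chloride threshold
cover = 0.045                                                 # concrete cover (m)
t = 50 * 365.25 * 24 * 3600                                   # 50 years in seconds

# Error-function solution of Fick's second law for a semi-infinite medium:
#   C(x, t) = Cs * (1 - erf(x / (2 * sqrt(D * t))))
C_at_rebar = Cs * (1.0 - erf(cover / (2.0 * np.sqrt(D * t))))

p_init = np.mean(C_at_rebar > Ccr)
print(f"Estimated probability of corrosion initiation at 50 years: {p_init:.3f}")
```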

Abstract:

We establish a refined version of the Second Law of Thermodynamics for Langevin stochastic processes describing mesoscopic systems driven by conservative or non-conservative forces and interacting with thermal noise. The refinement is based on Monge-Kantorovich optimal mass transport and becomes relevant for processes far from the quasi-stationary regime. The general discussion is illustrated by a numerical analysis of the optimal memory-erasure protocol for a model of a micron-sized particle manipulated by optical tweezers.
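A minimal sketch, in reduced units and with an assumed stiffness ramp rather than the optimal Monge-Kantorovich protocol, of the kind of overdamped Langevin dynamics and work accounting involved:

```python
import numpy as np

rng = np.random.default_rng(2)

# Overdamped Langevin bead in a harmonic optical trap whose stiffness k(t) is ramped up,
# a crude stand-in for a compression step of a memory-erasure protocol (assumed protocol).
gamma, kT = 1.0, 1.0                         # friction coefficient and thermal energy
T_f, dt, n_traj = 10.0, 1e-3, 2000
steps = int(T_f / dt)
k = lambda t: 1.0 + 4.0 * t / T_f            # assumed stiffness ramp

x = rng.normal(0.0, np.sqrt(kT / k(0.0)), n_traj)   # equilibrium initial ensemble
work = np.zeros(n_traj)
for i in range(steps):
    t = i * dt
    work += 0.5 * (k(t + dt) - k(t)) * x ** 2        # dW = (dU/dt) dt with U = k(t) x^2 / 2
    # Euler-Maruyama update of the overdamped dynamics
    x += (-k(t) * x / gamma) * dt + np.sqrt(2 * kT * dt / gamma) * rng.normal(size=n_traj)

delta_F = 0.5 * kT * np.log(k(T_f) / k(0.0))         # free-energy change of the harmonic trap
print(f"<W> ~ {work.mean():.3f} kT, dF = {delta_F:.3f} kT, <W> - dF ~ {work.mean() - delta_F:.3f} kT")
```

The ensemble-averaged work exceeds the free-energy change, as the standard Second Law requires; the refinement discussed above tightens this bound for protocols far from the quasi-stationary regime.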

Abstract:

A constitutive model is presented for the in-plane mechanical behavior of nonwoven fabrics. The model is developed within the context of the finite element method and provides the constitutive response for a mesodomain of the fabric corresponding to the area associated with a finite element. The model is built from three blocks, namely fabric, fibers, and damage. The continuum tensorial formulation of the fabric response rigorously takes into account the effect of fiber rotation at large strains and includes the nonlinear fiber behavior. In addition, the various damage mechanisms observed experimentally (bond and fiber fracture, interfiber friction, and fiber pull-out) are included in a phenomenological way, and the random nature of these materials is also taken into account by means of a Monte Carlo lottery that determines the damage thresholds. The model results are validated against recent experimental results on the tensile response of smooth and notched specimens of a polypropylene nonwoven fabric.
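The Monte Carlo lottery for damage thresholds can be illustrated with a toy sketch; the Weibull distributions and all numbers below are assumptions for illustration, not the calibrated values of the model.

```python
import numpy as np

rng = np.random.default_rng(3)
n_elements = 500                          # finite elements in the fabric mesodomain mesh

# Monte Carlo "lottery": each element draws its own damage thresholds, so nominally identical
# elements fail at different load levels, mimicking the random microstructure of the nonwoven.
# The Weibull distribution here is an assumption, not necessarily the paper's choice.
bond_strength  = 1.0 * rng.weibull(a=4.0, size=n_elements)   # bond-failure threshold (normalized)
fiber_strength = 1.5 * rng.weibull(a=6.0, size=n_elements)   # fiber-fracture threshold (normalized)

def damaged(applied_stress):
    """Return boolean masks of elements whose bond or fiber threshold is exceeded."""
    return applied_stress > bond_strength, applied_stress > fiber_strength

bond_fail, fiber_fail = damaged(applied_stress=0.9)
print(f"bond failures: {bond_fail.sum()}, fiber fractures: {fiber_fail.sum()} of {n_elements}")
```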

Abstract:

In this paper, we present calculations of the absorption coefficient for transitions between the bound states of quantum dots grown within a semiconductor and the extended states of the conduction band. For completeness, transitions among bound states are also presented. A single-band k·p model with separation of variables is used, in which most elements can be expressed analytically. The analytical formulae are collected in the appendix of this paper. It is concluded that the transitions are strong enough to provide a quick path to the conduction band for electrons pumped from the valence band to the intermediate band.

Abstract:

Probabilistic modeling is the defining characteristic of estimation of distribution algorithms (EDAs), determining their behavior and performance in optimization. Regularization is a well-known statistical technique used to obtain an improved model by reducing the generalization error of estimation, especially in high-dimensional problems. ℓ1-regularization is a variant of this technique with an appealing variable-selection property that results in sparse model estimations. In this thesis, we study the use of regularization techniques for model learning in EDAs. Several methods for regularized model estimation in continuous domains based on a Gaussian distribution assumption are presented and analyzed from different aspects when used for optimization in a high-dimensional setting, where the population size of the EDA scales logarithmically with the number of variables. The optimization results obtained for a number of continuous problems with an increasing number of variables show that the proposed EDA based on regularized model estimation performs a more robust optimization and is able to achieve significantly better results for larger dimensions than other Gaussian-based EDAs. We also propose a method for learning a marginally factorized Gaussian Markov random field model using regularization techniques and a clustering algorithm. The experimental results show notable optimization performance on continuous additively decomposable problems when this model estimation method is used. Our study also covers multi-objective optimization, and we propose joint probabilistic modeling of variables and objectives in EDAs based on Bayesian networks, specifically models inspired by multi-dimensional Bayesian network classifiers. It is shown that with this approach to modeling, two new types of relationships are encoded in the estimated models in addition to the variable relationships captured in other EDAs: objective-variable and objective-objective relationships. An extensive experimental study shows the effectiveness of this approach for multi- and many-objective optimization. With the proposed joint variable-objective modeling, in addition to the Pareto set approximation, the algorithm is also able to obtain an estimation of the multi-objective problem structure. Finally, the study of multi-objective optimization based on joint probabilistic modeling is extended to noisy domains, where the noise in objective values is represented by intervals. A new version of the Pareto dominance relation for ordering the solutions in these problems, namely α-degree Pareto dominance, is introduced and its properties are analyzed. We show that ranking methods based on this dominance relation can result in competitive performance of EDAs with respect to the quality of the approximated Pareto sets. This dominance relation is then used together with a method for joint probabilistic modeling based on ℓ1-regularization for multi-objective feature subset selection in classification, where six different measures of accuracy are considered as objectives with interval values. The individual assessment of the proposed joint probabilistic modeling and solution ranking methods on datasets of small to medium dimensionality, using two different Bayesian classifiers, shows that Pareto sets of feature subsets comparable to or better than those of standard methods are approximated.
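A minimal sketch of a regularized Gaussian EDA iteration, using graphical-lasso (ℓ1-regularized precision) estimation on a toy objective; the objective function, population-sizing constant, and regularization strength are assumptions, not the thesis' experimental setup.

```python
import numpy as np
from sklearn.covariance import GraphicalLasso

rng = np.random.default_rng(4)

def sphere(x):                      # toy objective to minimize (stand-in for a benchmark problem)
    return np.sum(x ** 2, axis=1)

d = 50                              # number of variables
pop = int(10 * np.log(d))           # logarithmic population scaling, as discussed above
X = rng.uniform(-5, 5, size=(pop, d))

for gen in range(20):
    f = sphere(X)
    elite = X[np.argsort(f)[: pop // 2]]                 # truncation selection
    # Regularized (sparse-precision) Gaussian model of the selected solutions
    model = GraphicalLasso(alpha=0.1, max_iter=200).fit(elite)
    X = rng.multivariate_normal(model.location_, model.covariance_, size=pop)

print("best objective value found:", sphere(X).min())
```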

Abstract:

Although most of the research on cognitive radio is focused on communication bands above the HF upper limit (30 MHz), cognitive radio principles can also be applied to HF communications to use the extremely scarce spectrum more efficiently. In this work we consider legacy users as primary users, since these users transmit without resorting to any smart procedure, and our stations using the HFDVL (HF Data+Voice Link) architecture as secondary users. Our goal is to enhance the efficient use of the HF band by detecting the presence of uncoordinated primary users and avoiding collisions with them while transmitting in different HF channels using our broad-band HF transceiver. A model of the primary-user activity dynamics in the HF band is developed in this work to make short-term predictions of the sojourn time of a primary user in the band and so avoid collisions. It is based on Hidden Markov Models (HMMs), which are a powerful tool for modelling stochastic processes, and is trained with real measurements of the 14 MHz band. Using the proposed HMM-based model, the predictor achieves an average 10.3% prediction error rate with one minute of channel knowledge, which can be reduced when this knowledge is extended: with knowledge of the previous 8 minutes, an average 5.8% prediction error rate is achieved. These results suggest that the resulting activity model for the HF band could be used to predict primary-user activity and be included in a future HF cognitive-radio-based station.
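As a simplified illustration of predicting a primary user's sojourn time from channel occupancy data, the sketch below estimates a two-state (busy/idle) Markov chain from synthetic binary samples; the full HMM machinery and the paper's real 14 MHz measurements are not reproduced here.

```python
import numpy as np

rng = np.random.default_rng(5)

# Synthetic binary channel occupancy (1 = primary user active), one sample per second.
occupancy = (rng.random(3600) < 0.3).astype(int)

# Estimate the two-state transition matrix from consecutive samples.
counts = np.zeros((2, 2))
for a, b in zip(occupancy[:-1], occupancy[1:]):
    counts[a, b] += 1
P = counts / counts.sum(axis=1, keepdims=True)

# For a Markov chain, the sojourn time in state s is geometric with mean 1 / (1 - P[s, s]).
expected_busy_sojourn = 1.0 / (1.0 - P[1, 1])
print("estimated transition matrix:\n", np.round(P, 3))
print(f"expected busy sojourn ~ {expected_busy_sojourn:.1f} s")
```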

Abstract:

The characteristics of the power-line communication (PLC) channel are difficult to model due to the heterogeneity of the networks and the lack of common wiring practices. To obtain the full variability of the PLC channel, random channel generators are of great importance for the design and testing of communication algorithms. In this respect, we propose a random channel generator that is based on the top-down approach. Basically, we describe the multipath propagation and the coupling effects with an analytical model. We introduce the variability into a restricted set of parameters and, finally, we fit the model to a set of measured channels. The proposed model enables a closed-form description of both the mean path-loss profile and the statistical correlation function of the channel frequency response. As an example of application, we apply the procedure to a set of in-home measured channels in the band 2-100 MHz whose statistics are available in the literature. The measured channels are divided into nine classes according to their channel capacity. We provide the parameters for the random generation of channels for all nine classes, and we show that the results are consistent with the experimental ones. Finally, we merge the classes to capture the entire heterogeneity of in-home PLC channels. In detail, we introduce the class occurrence probability, and we present a random channel generator that targets the ensemble of all nine classes. The statistics of the composite set of channels are also studied, and they are compared to the results of experimental measurement campaigns in the literature.
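For illustration, the sketch below draws one random channel realization from the classical top-down multipath form of the PLC frequency response; the number of paths, gains, and attenuation parameters are invented and do not correspond to the paper's nine fitted classes.

```python
import numpy as np

rng = np.random.default_rng(6)

# Classical top-down multipath form of the PLC channel frequency response:
#   H(f) = sum_i  g_i * exp(-(a0 + a1 * f^k) * d_i) * exp(-j * 2*pi*f * d_i / v)
# All parameter values below are illustrative assumptions.
f = np.linspace(2e6, 100e6, 1000)        # 2-100 MHz band
N = 8                                     # number of propagation paths
g = rng.uniform(-0.4, 0.4, N)             # random path gains (reflection/transmission products)
d = np.sort(rng.uniform(5.0, 80.0, N))    # path lengths in metres
a0, a1, k = 1e-3, 2e-10, 1.0              # attenuation parameters
v = 2e8                                   # propagation speed in the cable (m/s)

H = np.zeros_like(f, dtype=complex)
for gi, di in zip(g, d):
    H += gi * np.exp(-(a0 + a1 * f ** k) * di) * np.exp(-2j * np.pi * f * di / v)

gain_db = 20 * np.log10(np.abs(H) + 1e-12)
print(f"average channel gain over the band: {gain_db.mean():.1f} dB")
```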

Abstract:

The boundary element method (BEM) has been applied successfully to many engineering problems during the last decades. Compared with domain-type methods such as the finite element method (FEM) or the finite difference method (FDM), the BEM can handle problems where the medium extends to infinity much more easily, as there is no need to develop special boundary conditions (quiet or absorbing boundaries) or infinite elements at the boundaries introduced to limit the domain studied. The determination of the dynamic stiffness of arbitrarily shaped footings is just one of the fields where the BEM has been the method of choice, especially in the 1980s. With the continuous development of computer technology and the available hardware, the size of the problems under study grew and, as the flop count for solving the resulting linear system of equations grows with the third power of the number of equations, there was a need to develop iterative methods with better performance. In [1] the GMRES algorithm was presented, which is now widely used in implementations of the collocation BEM. While the FEM results in sparsely populated coefficient matrices, the BEM leads, in general, to fully or densely populated ones, depending on the number of subregions, posing a serious memory problem even for today's computers. If the geometry of the problem permits the surface of the domain to be meshed with equally shaped elements, many of the resulting coefficients will be calculated and stored repeatedly. The present paper shows how these unnecessary operations can be avoided, reducing both the calculation time and the storage requirement. To this end, a similar coefficient identification algorithm (SCIA) has been developed and implemented in a program written in Fortran 90. The vertical dynamic stiffness of a single pile in layered soil has been chosen to test the performance of the implementation. The results obtained with the 3D model may be compared with those obtained with an axisymmetric formulation, which are considered to be the reference values since the mesh quality is much better. The entire 3D model comprises more than 35,000 DOFs, the biggest single region being a soil region with 21,168 DOFs. Note that the memory necessary to store all coefficients of this single region is about 6.8 GB, an amount which is usually not available on personal computers. In the problem under study, the interface zone between the two adjacent soil regions as well as the surface of the top layer may be meshed with equally sized elements. In this case, the application of the SCIA leads to an important reduction in memory requirements: the maximum memory used during the calculation has been reduced to 1.2 GB. The application of the SCIA thus permits problems to be solved on personal computers which would otherwise require much more powerful hardware.
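The idea behind reusing coefficients of equally shaped elements can be sketched as a cache keyed by relative element geometry; this is a Python illustration with a placeholder kernel, not the authors' Fortran 90 SCIA implementation.

```python
import numpy as np

# When two source/receiver element pairs share the same relative geometry (and element shape),
# their BEM influence coefficients coincide, so each distinct coefficient is computed only once.
cache = {}

def relative_key(src_center, rec_center, decimals=9):
    """Canonical key: relative position vector rounded to suppress floating-point noise."""
    return tuple(np.round(np.asarray(rec_center) - np.asarray(src_center), decimals))

def expensive_kernel_integration(src, rec):
    # Placeholder for the numerical integration of the fundamental solution over an element.
    r = np.linalg.norm(np.asarray(rec) - np.asarray(src)) + 1e-12
    return 1.0 / (4.0 * np.pi * r)

def influence_coefficient(src_center, rec_center):
    key = relative_key(src_center, rec_center)
    if key not in cache:
        cache[key] = expensive_kernel_integration(src_center, rec_center)
    return cache[key]

# Regular mesh of equally sized elements: many pairs share the same relative geometry.
centers = [(i * 0.5, j * 0.5, 0.0) for i in range(20) for j in range(20)]
coeffs = [influence_coefficient(a, b) for a in centers for b in centers]
print(f"{len(coeffs)} coefficients assembled from only {len(cache)} distinct evaluations")
```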

Abstract:

The implementation of a charging policy for heavy goods vehicles in European Union (EU) member countries has been imposed to reflect the costs of construction and maintenance of infrastructure as well as externalities such as congestion, accidents, and environmental impact. In this context, EU countries approved the Eurovignette directive (1999/62/EC) and its amending directive (2006/38/EC), which established a legal framework to regulate the system of tolls. Even if that regulation seeks to increase the efficiency of freight, it will trigger direct and indirect effects on Spain's regional economies by increasing transport costs. This paper presents the development of a multiregional input-output methodology (MRIO) with elastic trade coefficients to predict interregional trade, using transport attributes integrated in multinomial logit models. This method is highly useful for carrying out an ex-ante evaluation of transport policies because it incorporates the sensitivity of road freight transport costs and determines the regional distributive and substitution economic effects in a country like Spain, characterized by socio-demographic and economic attributes that differ region by region. It will thus be possible to determine cost-effective strategies under different policy scenarios. The MRIO model is then used to determine the impact on employment of imposing a charge in the Madrid-Sevilla corridor in Spain. This methodology is important for measuring the impact on employment since employment is one of the main macroeconomic indicators of Spain's regional and national economic situation. Previous research (DESTINO) using an MRIO method estimated the employment impacts of a road pricing policy across Spanish regions, considering a fuel tax charge (€/liter) on the entire shortest-cost-path network for freight transport. It found that the variation in employment is expected to be substantial for some regions and negligible for others. For example, in that Spanish case study regional employment showed reductions of between 16.1% (Rioja) and 1.4% (Madrid region). This variation range seems to be related to either the intensity of freight transport in each region or the dependency of regions on transport-intensive economic sectors. In fact, regions with freight-transport-intensive sectors will lose more jobs, while regions with a predominantly service economy undergo a fairly insignificant loss of employment. This paper focuses on evaluating a freight transport vehicle-kilometer charge (€/km) on a non-tolled motorway corridor (A-4) between Madrid and Sevilla (517 km). The consequences of implementing the road pricing policy show that the employment reductions are not as high as those stated in the previous research, because this corridor does not affect the whole freight transport system of Spain.
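A toy sketch of an MRIO system with elastic, logit-based trade coefficients follows (the regions, costs, technical coefficient, and cost sensitivity are all invented, single-sector, and much simpler than the paper's model):

```python
import numpy as np

# Toy MRIO sketch: the share of region r's purchases supplied by region s follows a
# multinomial logit in (negative) generalized transport cost.
regions = ["Madrid", "Andalucia", "Castilla-La Mancha"]
cost = np.array([[0.0, 5.3, 1.9],      # assumed generalized transport cost between regions
                 [5.3, 0.0, 3.6],
                 [1.9, 3.6, 0.0]])
beta = 0.4                              # assumed cost sensitivity of the trade-coefficient logit
a = 0.3                                 # aggregate technical coefficient (single-sector toy)
final_demand = np.array([100.0, 80.0, 40.0])

def trade_coefficients(cost, beta):
    """Column r gives the shares of region r's purchases sourced from each region s."""
    attractiveness = np.exp(-beta * cost)
    return attractiveness / attractiveness.sum(axis=0, keepdims=True)

def output(cost):
    T = trade_coefficients(cost, beta)           # elastic trade coefficients
    A = a * T                                    # multiregional input coefficient matrix
    return np.linalg.solve(np.eye(3) - A, T @ final_demand)   # Leontief solution

x0 = output(cost)
x1 = output(cost + 1.0 * (cost > 0))             # add a charge on interregional flows
print("relative change in regional output:", np.round((x1 - x0) / x0, 4))
```

Regional employment impacts would then follow by applying region-specific employment-per-output ratios to the output changes.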

Abstract:

This paper addresses the assessment of the economic impact of the construction of a new road on the regional distribution of jobs. The paper summarizes the different existing modelling approaches used to assess economic impacts through a literature review. Afterwards, we present the development of a comprehensive approach for analyzing the interaction between new transport infrastructure and economic impact through an integrated model. The model has been applied to the construction of motorway A-40 in Spain (497 km), which runs across three regions without passing through Madrid City. The new road may in turn lead to the relocation of labor and capital due to the improved accessibility of markets or inputs. The results suggest the existence of direct and indirect effects in other regions derived from the improvement of the transportation infrastructure, and confirm the relevance of road freight transport in some regions. We found that the changes in regional employment are substantial for some regions (increasing or decreasing jobs), but at the same time negligible in other regions. As a result, the approach provides broad guidance to national governments and other transport-related parties about the impacts of this transport policy.

Abstract:

Crowd-induced dynamic loading on large structures, such as gymnasiums or stadiums, is usually modelled as a series of harmonic loads defined in terms of their Fourier coefficients. Different values of these Fourier coefficients, obtained from full-scale measurements, can be found in codes. Recently, an alternative has been proposed, based on the random generation of load time histories that take into account the phase lags among the individuals in the crowd. Generally, testing is performed on platforms or structures that can be considered rigid because their natural frequencies are higher than the excitation frequencies associated with crowd loading. In this paper we present tests carried out on a structure designed as a gymnasium, which has natural frequencies within that range. In these tests the gym slab was instrumented with acceleration sensors and different people jumped on a force plate installed on the floor. The test results have been compared with predictions based on the two above-mentioned load modelling alternatives, and a new methodology for modelling jumping loads has been proposed in order to reduce the difference between experimental and numerical results in the high-frequency range.
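The two load-modelling alternatives mentioned above can be sketched as follows; the body weight, jumping frequency, Fourier coefficients, and phase-lag range are illustrative assumptions rather than code-specified or measured values.

```python
import numpy as np

rng = np.random.default_rng(7)

G = 750.0                  # weight of one person (N), assumed
f_jump = 2.2               # jumping frequency (Hz), assumed
alphas = [1.6, 0.9, 0.3]   # assumed Fourier (dynamic load) coefficients of the first three harmonics
t = np.linspace(0.0, 10.0, 2000)

def person_load(t, phase=0.0):
    """Harmonic (Fourier-series) jumping-load model for a single person."""
    load = np.ones_like(t)
    for i, a in enumerate(alphas, start=1):
        load += a * np.sin(2 * np.pi * i * f_jump * t + i * phase)
    return G * load

# Crowd of n people with random phase lags: imperfect synchronization smooths the total load.
n = 20
crowd_load = sum(person_load(t, phase=rng.uniform(-0.6, 0.6)) for _ in range(n))

print(f"peak/static ratio, 1 person: {person_load(t).max() / G:.2f}")
print(f"peak/static ratio, crowd of {n}: {crowd_load.max() / (n * G):.2f}")
```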

Abstract:

A 2D computer simulation method for random packings is applied to sets of particles generated by a self-similar uniparametric model for particle size distributions (PSDs) in granular media. The parameter p which controls the model is the proportion of mass of particles corresponding to the left half of the normalized size interval [0,1]. First, the influence of the parameter p on the total porosity is analyzed and interpreted. It is shown that this parameter, and the fractal exponent of the associated power scaling, are efficient packing parameters, although the latter does not act in the way predicted by a previously published work addressing analogous research on artificial granular materials. The total porosity reaches its minimum value for p = 0.6. Limited information on the pore size distribution is obtained from the packing simulations by means of morphological analysis methods. The results show that the range of pore sizes increases as p decreases, and the shape of the volumetric pore size distribution also changes. Further research, including simulations with a greater number of particles and higher image resolution, is required to obtain finer results on the hierarchical structure of the pore space.
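A sketch of the self-similar uniparametric construction: mass is split recursively, with proportion p assigned to the left half of each size subinterval. The sampling scheme below is an illustration of this idea, not the PSD generator or packing code used in the paper.

```python
import numpy as np

rng = np.random.default_rng(8)

def sample_sizes(p, n_particles, levels=10):
    """Sample particle sizes (by mass) from a self-similar PSD on [0, 1] with parameter p."""
    sizes = np.zeros(n_particles)
    for k in range(n_particles):
        lo, hi = 0.0, 1.0
        for _ in range(levels):                      # descend the binary mass hierarchy
            mid = 0.5 * (lo + hi)
            if rng.random() < p:                     # mass proportion p goes to the left half
                hi = mid
            else:
                lo = mid
        sizes[k] = 0.5 * (lo + hi)
    return sizes

for p in (0.4, 0.5, 0.6, 0.7):
    s = sample_sizes(p, 20_000)
    print(f"p = {p}: mass fraction in [0, 0.5] = {np.mean(s < 0.5):.3f}, mean size = {s.mean():.3f}")
```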