991 results for GENERAL COEFFICIENT
Abstract:
[Support Institutions:] Department of Health Administration, University of Montreal, Canada; School of Public Health, Fudan University, Shanghai, China
Abstract:
We estimate the impact of regulatory heterogeneity on agri-food trade using a gravity analysis that relies on detailed data on non-tariff measures (NTMs) collected by the NTM-Impact project. The data cover a broad range of import requirements for agricultural and food products for the EU and nine of its major trade partners. We find that trade is significantly reduced when importing countries have stricter maximum residue limits (MRLs) for plant products than exporting countries. For most other measures, due to their qualitative nature, we were unable to infer whether the importer has stricter standards relative to the exporter, and we do not find a robust relationship between these measures and trade. Our findings suggest that, at least for some import standards, harmonising regulations will increase trade. We also conclude that tariff reductions remain an effective means to increase trade even when NTMs abound.
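For orientation, a stylized gravity specification of the kind estimated in such studies can be written as follows; this is an illustrative sketch only, and the variable names (tariff term, MRL stringency gap, fixed effects) are assumptions rather than the paper's exact model:

\[
\ln X_{ijk} = \beta_0 + \beta_1 \ln(\mathrm{GDP}_i\,\mathrm{GDP}_j) + \beta_2 \ln \mathrm{dist}_{ij} + \beta_3 \ln(1 + t_{ijk}) + \beta_4\,\mathrm{MRLgap}_{ijk} + \mu_i + \nu_j + \varepsilon_{ijk},
\]

where \(X_{ijk}\) is the value of exports of product \(k\) from country \(j\) to importer \(i\), \(t_{ijk}\) the applied tariff, \(\mathrm{MRLgap}_{ijk}\) an index of how much stricter the importer's maximum residue limits are than the exporter's, and \(\mu_i,\nu_j\) importer and exporter fixed effects. A negative \(\beta_4\) corresponds to the trade-reducing effect of stricter importer MRLs reported above.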
Abstract:
The aim of the present study was to trace the mortality profile of the elderly in Brazil using two neighboring age groups: 60 to 69 years (young-old) and 80 years or more (oldest-old). To do this, we sought to characterize the trend and distinctions of different mortality profiles, as well as the quality of the data and associations with socioeconomic and sanitary conditions in the micro-regions of Brazil. Data were collected from the Mortality Information System (SIM) and the Brazilian Institute of Geography and Statistics (IBGE). Based on these data, the coefficients of mortality were calculated for the chapters of the International Classification of Diseases (ICD-10). A polynomial regression model was used to ascertain the trend of the main chapters. Non-hierarchical cluster analysis (K-means) was used to obtain the profiles for different Brazilian micro-regions. Factorial analysis of the contextual variables was used to obtain the socioeconomic and sanitary deprivation indices (IPSS). The trend of the CMId and of the ratio of its values in the two age groups confirmed a decrease in most of the indicators, particularly for ill-defined causes among the oldest-old. Among the young-old, the following profiles emerged: the Development Profile; the Modernity Profile; the Epidemiological Paradox Profile and the Ignorance Profile. Among the oldest-old, the latter three profiles were confirmed, in addition to the Low Mortality Rates Profile. When comparing the mean IPSS values in global terms, all of the groups were different in both of the age groups. The Ignorance Profile was compared with the other profiles using orthogonal contrasts. This profile differed from all of the others, both in isolation and in clusters. However, the mean IPSS was similar for the Low Mortality Rates Profile among the oldest-old. Furthermore, associations were found between the data quality indicators, the CMId for ill-defined causes, the general coefficient of mortality for each age group (CGMId) and the IPSS of the micro-regions. The worst rates were recorded in areas with the greatest socioeconomic and sanitary deprivation. The findings of the present study show that, despite the decrease in the mortality coefficients, there are notable differences in the profiles related to contextual conditions, including regional differences in data quality. These differences increase the vulnerability of the age groups studied and the health inequities that are already present.
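A minimal sketch of the clustering step described above, not the study's actual code: the data matrix, the number of micro-regions and chapters, and the choice of four clusters are all assumptions made purely for illustration.

```python
# Illustrative K-means clustering of micro-region mortality-coefficient profiles.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)
# rows = micro-regions, columns = mortality coefficients by ICD-10 chapter (hypothetical data)
profiles = rng.gamma(shape=2.0, scale=1.0, size=(558, 10))

X = StandardScaler().fit_transform(profiles)          # put chapters on a common scale
km = KMeans(n_clusters=4, n_init=20, random_state=0)  # e.g. four mortality profiles
labels = km.fit_predict(X)                            # profile assigned to each micro-region
print(np.bincount(labels))                            # micro-regions per profile
```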
Abstract:
An analysis of the energy budget for the general case of a body translating in a stationary fluid under the action of an external force is used to define a power loss coefficient. This universal definition of power loss coefficient gives a measure of the energy lost in the wake of the translating body and, in general, is applicable to a variety of flow configurations including active drag reduction, self-propulsion and thrust generation. The utility of the power loss coefficient is demonstrated on a model bluff body flow problem concerning a two-dimensional elliptical cylinder in a uniform cross-flow. The upper and lower boundaries of the elliptic cylinder undergo continuous motion due to a prescribed reflectionally symmetric constant tangential surface velocity. It is shown that a decrease in drag resulting from an increase in the strength of tangential surface velocity leads to an initial reduction and eventual rise in the power loss coefficient. A maximum in energetic efficiency is attained for a drag reducing tangential surface velocity which minimizes the power loss coefficient. The effect of the tangential surface velocity on drag reduction and self-propulsion of both bluff and streamlined bodies is explored through a variation in the thickness ratio (ratio of the minor and major axes) of the elliptical cylinders.
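The abstract does not reproduce the definition itself; as a heavily hedged sketch, a power loss coefficient of the kind described (energy lost to the wake per unit time, made dimensionless with free-stream quantities) might take the form

\[
C_{PL} \;=\; \frac{P_{\text{wake}}}{\tfrac{1}{2}\,\rho\,U^{3} A},
\]

where \(U\) is the translation speed, \(\rho\) the fluid density, \(A\) a reference area of the body, and \(P_{\text{wake}}\) the rate at which energy is left behind in the wake, which for the actuated elliptic cylinder would include both the power needed to tow the body against its drag and the power expended in driving the tangential surface velocity. This is an assumed normalization, not necessarily the paper's exact definition; it is consistent with the stated behaviour that stronger surface actuation first lowers and then raises the coefficient, with the minimum marking the most energetically efficient drag-reducing actuation.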
Abstract:
The problem of the concentration jump of a vapour in the vicinity of a plane wall, which consists of the condensed phase of the vapour, in a rarefied gas mixture of that vapour (A) and another 'inert' gas (B), is considered. The general formulation of the problem of determining the concentration-jump coefficient d_A is given. In the Knudsen layer, the simplest model of Boley-Yip theory is used to simplify the Boltzmann equations for the binary gas mixture. The numerical calculation of the concentration-jump coefficient d_A for various values of the evaporation coefficient of A is illustrated for the case in which the equilibrium concentration of B is much greater than that of A, for which experimental data are available.
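As a generic illustration only (the notation here is assumed, not taken from the paper), a concentration-jump boundary condition of this kind is typically written so that the vapour density extrapolated to the wall deviates from its equilibrium value in proportion to its normal gradient:

\[
n_A\big|_{\text{wall}} - n_A^{\text{eq}}(T_w) \;=\; d_A\,\ell\,\left.\frac{\partial n_A}{\partial x}\right|_{\text{wall}},
\]

where \(n_A\) is the vapour number density, \(n_A^{\text{eq}}(T_w)\) its saturation value at the wall temperature, \(\ell\) a mean free path and \(x\) the coordinate normal to the wall; the coefficient \(d_A\) then depends on the evaporation coefficient of A and on the composition of the mixture, which is what the numerical calculation quantifies.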
Abstract:
The ideal free distribution model which relates the spatial distribution of mobile consumers to that of their resource is shown to be a limiting case of a more general model which we develop using simple concepts of diffusion. We show how the ideal free distribution model can be derived from a more general model and extended by incorporating simple models of social influences on predator spacing. First, a free distribution model based on patch switching rules, with a power-law interference term, which represents instantaneous biased diffusion, is derived. A social bias term is then introduced to represent the effect of predator aggregation on predator fitness, separate from any effects which act through intake rate. The social bias term is expanded to express an optimum spacing for predators and example solutions of the resulting biased diffusion models are shown. The model demonstrates how an empirical interference coefficient, derived from measurements of predator and prey densities, may include factors expressing the impact of social spacing behaviour on fitness. We conclude that empirical values of the log predator/log prey ratio may contain information about more than the relationship between consumer and resource densities. Unlike many previous models, the model shown here applies to conditions without continual input. (C) 1997 Academic Press Limited.
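For context, the power-law interference term mentioned above is conventionally written as follows (Hassell-Varley form); the symbols are generic rather than the authors' notation:

\[
W_i = \frac{Q_i}{P_i^{\,m}}, \qquad W_i = \text{const across patches} \;\Rightarrow\; \log P_i = \frac{1}{m}\,\log Q_i + \text{const},
\]

where \(W_i\) is the intake rate on patch \(i\), \(Q_i\) the resource availability, \(P_i\) the predator density and \(m\) the interference coefficient. Under the ideal free distribution the slope of log predator density on log resource density therefore estimates \(1/m\); the point of the more general model above is that social spacing effects can also load onto the empirically fitted \(m\).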
Abstract:
Social networks generally display a positively skewed degree distribution and higher values for clustering coefficient and degree assortativity than would be expected from the degree sequence. For some types of simulation studies, these properties need to be varied in the artificial networks over which simulations are to be conducted. Various algorithms to generate networks have been described in the literature but their ability to control all three of these network properties is limited. We introduce a spatially constructed algorithm that generates networks with constrained but arbitrary degree distribution, clustering coefficient and assortativity. Both a general approach and specific implementation are presented. The specific implementation is validated and used to generate networks with a constrained but broad range of property values. © Copyright JASSS.
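The sketch below is not the authors' algorithm; it only illustrates, on a simple spatially embedded graph, how the three properties the paper controls (degree-distribution skew, clustering coefficient, degree assortativity) are measured. The graph generator and parameter values are assumptions.

```python
# Measure the three network properties on a simple spatial graph with networkx.
import networkx as nx
from scipy.stats import skew

# Random geometric graph: nodes connect when closer than `radius` in the unit square,
# a basic spatial construction (the paper's algorithm is more elaborate).
G = nx.random_geometric_graph(n=1000, radius=0.06, seed=42)

degrees = [d for _, d in G.degree()]
print("degree skewness:     ", skew(degrees))
print("clustering coeff.:   ", nx.average_clustering(G))
print("degree assortativity:", nx.degree_assortativity_coefficient(G))
```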
Abstract:
Abstract taken from the publication.
Abstract:
The global cycle of multicomponent aerosols including sulfate, black carbon (BC), organic matter (OM), mineral dust, and sea salt is simulated in the Laboratoire de Météorologie Dynamique general circulation model (LMDZT GCM). The seasonal open biomass burning emissions for simulation years 2000-2001 are scaled from climatological emissions in proportion to satellite-detected fire counts. The emissions of dust and sea salt are parameterized online in the model. The comparison of model-predicted monthly mean aerosol optical depth (AOD) at 500 nm with the Aerosol Robotic Network (AERONET) shows good agreement, with a correlation coefficient of 0.57 (N = 1324) and 76% of data points falling within a factor of 2 deviation. The correlation coefficient for daily mean values drops to 0.49 (N = 23,680). The absorption AOD (τ_a at 670 nm) estimated in the model is poorly correlated with measurements (r = 0.27, N = 349). It is biased low by 24% as compared to AERONET. The model reproduces the prominent features in the monthly mean AOD retrievals from the Moderate Resolution Imaging Spectroradiometer (MODIS). The agreement between the model and MODIS is better over source and outflow regions (i.e., within a factor of 2). There is an underestimation by the model of up to a factor of 3 to 5 over some remote oceans. The largest contribution to the global annual average AOD (0.12 at 550 nm) is from sulfate (0.043 or 35%), followed by sea salt (0.027 or 23%), dust (0.026 or 22%), OM (0.021 or 17%), and BC (0.004 or 3%). The atmospheric aerosol absorption is predominantly contributed by BC and is about 3% of the total AOD. The globally and annually averaged shortwave (SW) direct aerosol radiative perturbation (DARP) in clear-sky conditions is -2.17 W m⁻² and is about a factor of 2 larger than in all-sky conditions (-1.04 W m⁻²). The net DARP (SW + LW) by all aerosols is -1.46 and -0.59 W m⁻² in clear- and all-sky conditions, respectively. Use of realistic, less SW-absorbing optical properties for dust results in negative forcing over the dust-dominated regions.
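A short sketch of how the quoted skill metrics (correlation coefficient and fraction of points within a factor of 2) are computed; the AOD arrays below are hypothetical, not the model or AERONET data.

```python
# Correlation and factor-of-2 agreement between model and observed AOD (hypothetical arrays).
import numpy as np

rng = np.random.default_rng(1)
aeronet_aod = rng.lognormal(mean=-2.0, sigma=0.6, size=1324)                  # hypothetical observations
model_aod = aeronet_aod * rng.lognormal(mean=0.0, sigma=0.5, size=1324)       # hypothetical model values

r = np.corrcoef(model_aod, aeronet_aod)[0, 1]
ratio = model_aod / aeronet_aod
within_factor_2 = np.mean((ratio >= 0.5) & (ratio <= 2.0))

print(f"correlation coefficient: {r:.2f}")
print(f"fraction within a factor of 2: {within_factor_2:.0%}")
```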
Abstract:
The study of the association between two random variables that have a joint normal distribution is of interest in applied statistics; for example, in statistical genetics. This article, aimed at applied statisticians, addresses inferences about the coefficient of correlation (ρ) in the bivariate normal and standard bivariate normal distributions using likelihood, frequentist, and Bayesian perspectives. Some results are surprising. For instance, the maximum likelihood estimator and the posterior distribution of ρ in the standard bivariate normal distribution do not follow directly from results for a general bivariate normal distribution. An example employing bootstrap and rejection sampling procedures is used to illustrate some of the peculiarities.
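A minimal sketch, not the article's code: a nonparametric bootstrap confidence interval for the correlation coefficient of a simulated bivariate normal sample, one of the procedures the abstract mentions. Sample size, ρ, and the number of resamples are assumptions.

```python
# Bootstrap confidence interval for Pearson's r on a simulated bivariate normal sample.
import numpy as np

rng = np.random.default_rng(2)
rho = 0.6
cov = [[1.0, rho], [rho, 1.0]]
xy = rng.multivariate_normal(mean=[0.0, 0.0], cov=cov, size=200)   # simulated data

def pearson_r(a):
    return np.corrcoef(a[:, 0], a[:, 1])[0, 1]

boot = np.array([
    pearson_r(xy[rng.integers(0, len(xy), len(xy))])  # resample rows with replacement
    for _ in range(5000)
])
lo, hi = np.percentile(boot, [2.5, 97.5])
print(f"r = {pearson_r(xy):.3f}, 95% bootstrap CI = ({lo:.3f}, {hi:.3f})")
```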
Abstract:
Coordenação de Aperfeiçoamento de Pessoal de Nível Superior (CAPES)
Abstract:
The boundary element method (BEM) has been applied successfully to many engineering problems during the last decades. Compared with domain-type methods like the finite element method (FEM) or the finite difference method (FDM), the BEM can handle problems where the medium extends to infinity much more easily, as there is no need to develop special boundary conditions (quiet or absorbing boundaries) or infinite elements at the boundaries introduced to limit the domain studied. The determination of the dynamic stiffness of arbitrarily shaped footings is just one of the fields where the BEM has been the method of choice, especially in the 1980s. With the continuous development of computer technology and the available hardware equipment, the size of the problems under study grew and, as the flop count for solving the resulting linear system of equations grows with the third power of the number of equations, there was a need for the development of iterative methods with better performance. In [1] the GMRES algorithm was presented, which is now widely used for implementations of the collocation BEM. While the FEM results in sparsely populated coefficient matrices, the BEM leads, in general, to fully or densely populated ones, depending on the number of subregions, posing a serious memory problem even for today's computers. If the geometry of the problem permits the surface of the domain to be meshed with equally shaped elements, many of the resulting coefficients will be calculated and stored repeatedly. The present paper shows how these unnecessary operations can be avoided, reducing the calculation time as well as the storage requirement. To this end, a similar coefficient identification algorithm (SCIA) has been developed and implemented in a program written in Fortran 90. The vertical dynamic stiffness of a single pile in layered soil has been chosen to test the performance of the implementation. The results obtained with the 3-D model may be compared with those obtained with an axisymmetric formulation, which are considered to be the reference values as the mesh quality is much better. The entire 3-D model comprises more than 35,000 degrees of freedom (DOFs), the biggest single region being a soil region with 21,168 DOFs. Note that the memory necessary to store all coefficients of this single region is about 6.8 GB, an amount which is usually not available on personal computers. In the problem under study, the interface zone between the two adjacent soil regions as well as the surface of the top layer may be meshed with equally sized elements. In this case the application of the SCIA leads to an important reduction in memory requirements; the maximum memory used during the calculation has been reduced to 1.2 GB. The application of the SCIA thus permits problems to be solved on personal computers which would otherwise require much more powerful hardware.
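A toy sketch of the idea behind similar-coefficient identification, not the paper's Fortran 90 implementation: if two source/field element pairs of equally shaped elements have the same relative geometry, the influence coefficient is computed once and reused. The geometric key and the placeholder kernel below are simplifications assumed for illustration.

```python
# Cache BEM-style influence coefficients by the relative geometry of element pairs.
import numpy as np

_cache = {}

def relative_key(src_center, fld_center, tol=1e-9):
    """Integer key describing the relative position of two equally shaped elements."""
    d = np.asarray(fld_center) - np.asarray(src_center)
    return tuple(np.round(d / tol).astype(np.int64))

def influence_coefficient(src_center, fld_center):
    """Placeholder kernel evaluation; the real BEM surface integral is far more costly."""
    key = relative_key(src_center, fld_center)
    if key not in _cache:                      # compute only for unseen relative geometry
        r = np.linalg.norm(np.subtract(fld_center, src_center)) + 1e-12
        _cache[key] = 1.0 / (4.0 * np.pi * r)  # e.g. 3-D Laplace fundamental solution
    return _cache[key]

# A regular mesh of equally sized elements: many pairs share the same relative offset,
# so the cache stays small while the coefficient matrix is still fully populated.
centers = [(i * 0.1, j * 0.1, 0.0) for i in range(20) for j in range(20)]
A = np.array([[influence_coefficient(s, f) for f in centers] for s in centers])
print(A.shape, "matrix entries;", len(_cache), "distinct coefficients computed")
```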
Abstract:
This paper presents the Gini coefficient, the dissimilarity index and the Lorenz curve for the Spanish port system by type of goods from 1960 to 2010 for the business units total traffic, liquid bulk cargo, solid bulk cargo, general merchandise and containers (TEUs), with the aim of characterising the Spanish port system over this period and proposing future strategies.
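A minimal sketch of how a Gini coefficient and Lorenz curve are computed from traffic concentrated across ports; the traffic figures below are hypothetical, not the paper's data.

```python
# Gini coefficient and Lorenz curve points for traffic shares across ports.
import numpy as np

def gini(values):
    """Gini coefficient of a non-negative 1-D array."""
    x = np.sort(np.asarray(values, dtype=float))
    n = x.size
    # Standard formula on sorted data: G = 2*sum(i*x_i)/(n*sum(x)) - (n+1)/n
    return (2.0 * np.sum(np.arange(1, n + 1) * x) / (n * x.sum())) - (n + 1.0) / n

traffic = np.array([120.0, 45.0, 300.0, 15.0, 80.0, 640.0])                 # hypothetical tonnage per port
lorenz = np.insert(np.cumsum(np.sort(traffic)) / traffic.sum(), 0, 0.0)     # cumulative traffic share

print("Gini coefficient:", round(gini(traffic), 3))
print("Lorenz curve points:", np.round(lorenz, 3))
```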
Abstract:
In previous statnotes, the application of correlation and regression methods to the analysis of two variables (X, Y) was described. The most important statistic used to measure the degree of correlation between two variables is Pearson’s ‘product moment correlation coefficient’ (‘r’). The correlation between two variables may be due to their common relation to other variables. Hence, investigators using correlation studies need to be alert to the possibility of spurious correlation, and ‘partial correlation’ is one method of taking this into account. This statnote applies the methods of partial correlation to three scenarios. First, to a fairly obvious example of a spurious correlation resulting from the ‘size effect’, involving the relationship between the number of general practitioners (GPs) and the number of deaths of patients in a town. Second, to the relationship between the abundance of the nitrogen-fixing bacterium Azotobacter in soil and three soil variables, and finally, to a more complex scenario, first introduced in Statnote 24, involving the relationship between the growth of lichens in the field and climate.
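A short sketch of the first-order partial correlation that the statnote applies: the correlation between X and Y after removing the common influence of a third variable Z. The data are simulated solely for illustration, loosely mirroring the GP/deaths/town-size example above.

```python
# First-order partial correlation r_xy.z on simulated data.
import numpy as np

rng = np.random.default_rng(3)
z = rng.normal(size=500)                 # e.g. town size
x = 2.0 * z + rng.normal(size=500)       # e.g. number of GPs
y = 1.5 * z + rng.normal(size=500)       # e.g. number of deaths

r_xy = np.corrcoef(x, y)[0, 1]
r_xz = np.corrcoef(x, z)[0, 1]
r_yz = np.corrcoef(y, z)[0, 1]

# Partial correlation of x and y controlling for z
r_xy_z = (r_xy - r_xz * r_yz) / np.sqrt((1 - r_xz**2) * (1 - r_yz**2))
print(f"raw r_xy = {r_xy:.2f}, partial r_xy.z = {r_xy_z:.2f}")
```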
Abstract:
We examine how the most prevalent stochastic properties of key financial time series have been affected during the recent financial crises. In particular, we focus on changes associated with the remarkable economic events of the last two decades in the volatility dynamics, including the underlying volatility persistence and volatility spillover structure. Using daily data from several key stock market indices, the results of our bivariate GARCH models show the existence of time-varying correlations as well as time-varying shock and volatility spillovers between the returns of the FTSE and DAX, and those of the NIKKEI and Hang Seng, which became more prominent during the recent financial crisis. Our theoretical considerations on the time-varying model, which provides the platform upon which we integrate our multifaceted empirical approaches, are also of independent interest. In particular, we provide the general solution for time-varying asymmetric GARCH specifications, which is a long-standing research topic. This enables us to characterize these models by deriving, first, their multistep-ahead predictors, second, the first two time-varying unconditional moments, and third, their covariance structure.
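The abstract does not state the specification; as an illustration only, a time-varying asymmetric GARCH(1,1) of the general kind referred to can be written with coefficients that are deterministic functions of rescaled time \(t/T\):

\[
\sigma_t^{2} = \omega\!\left(\tfrac{t}{T}\right) + \Big[\alpha\!\left(\tfrac{t}{T}\right) + \gamma\!\left(\tfrac{t}{T}\right)\mathbb{1}\{\varepsilon_{t-1}<0\}\Big]\,\varepsilon_{t-1}^{2} + \beta\!\left(\tfrac{t}{T}\right)\sigma_{t-1}^{2},
\]

where \(\gamma(\cdot)\) captures the asymmetric response to negative shocks. Holding the coefficient functions constant recovers the standard GJR-GARCH(1,1), while time variation in them induces the time-varying unconditional moments and covariance structure mentioned above.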