939 results for Global model
Abstract:
The study introduces a new regression model developed to estimate hourly values of diffuse solar radiation at the surface. The model is based on the relationship between the clearness index and the diffuse fraction, and includes the effects of cloud (cloudiness and cloud type), traditional meteorological variables (air temperature, relative humidity and atmospheric pressure observed at the surface) and air pollution (concentration of particulate matter observed at the surface). The new model predicts hourly values of diffuse solar radiation better than previously developed ones (R² = 0.93 and RMSE = 0.085). A simpler version with broad applicability is proposed that takes into consideration cloud effects only (cloudiness and cloud height) and shows an R² = 0.92.
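As a concrete illustration of the kind of fit this abstract describes, the sketch below regresses a synthetic diffuse fraction on the clearness index and surface meteorology by ordinary least squares. It is a minimal stand-in, not the paper's model: the predictors, coefficients and data are all assumed.

```python
# Minimal sketch: regress the hourly diffuse fraction (kd = diffuse / global
# irradiance) on the clearness index (kt) plus surface meteorology.
# All variable names and coefficients here are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(0)
n = 500
kt = rng.uniform(0.1, 0.8, n)            # clearness index
temp = rng.uniform(10, 35, n)            # air temperature (deg C)
rh = rng.uniform(20, 95, n)              # relative humidity (%)
pm = rng.uniform(5, 80, n)               # particulate matter (ug/m3)
# Synthetic "truth": diffuse fraction falls with kt, rises with humidity/aerosol
kd = 1.0 - 1.1 * kt + 0.002 * rh + 0.001 * pm + rng.normal(0, 0.05, n)

X = np.column_stack([np.ones(n), kt, temp, rh, pm])   # design matrix
beta, *_ = np.linalg.lstsq(X, kd, rcond=None)         # OLS fit

kd_hat = X @ beta
ss_res = np.sum((kd - kd_hat) ** 2)
ss_tot = np.sum((kd - kd.mean()) ** 2)
print("R^2 =", 1 - ss_res / ss_tot)
print("RMSE =", np.sqrt(ss_res / n))
```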
Abstract:
The paleoclimate version of the National Center for Atmospheric Research Community Climate System Model version 3 (NCAR-CCSM3) is used to analyze changes in water formation rates in the Atlantic, Pacific, and Indian Oceans for the Last Glacial Maximum (LGM), mid-Holocene (MH) and pre-industrial (PI) control climates. During the MH, CCSM3 exhibits a north-south asymmetric response of intermediate water subduction changes in the Atlantic Ocean, with a reduction of 2 Sv in the North Atlantic and an increase of 2 Sv in the South Atlantic relative to PI. During the LGM, there is increased formation of intermediate water and a more stagnant deep ocean in the North Pacific. The production of North Atlantic Deep Water (NADW) is significantly weakened. The NADW is replaced to a large extent by enhanced Antarctic Intermediate Water (AAIW), Glacial North Atlantic Intermediate Water (GNAIW), and also by intensified Antarctic Bottom Water (AABW), the latter being a response to the enhanced salinity and ice formation around Antarctica. Most of the LGM intermediate/mode water is formed at 27.4 < σθ < 29.0 kg/m³, while for the MH and PI most of the subduction transport occurs at 26.5 < σθ < 27.4 kg/m³. The simulated LGM Southern Hemisphere winds are more intense by 0.2-0.4 dyne/cm². Consequently, increased Ekman transport drives the production of (low salinity) intermediate water at a larger rate and at higher densities when compared to the other climatic periods.
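The density-class bookkeeping behind numbers like these can be sketched in a few lines: accumulate per-cell subduction transport into potential-density (σθ) bins. The transports and densities below are synthetic assumptions, not CCSM3 output.

```python
# Minimal sketch: bin subduction transport into sigma-theta classes,
# as in the comparison of LGM vs MH/PI water mass formation densities.
import numpy as np

rng = np.random.default_rng(1)
n_cells = 10_000
sigma_theta = rng.uniform(25.0, 29.5, n_cells)   # kg/m3, per grid cell (assumed)
transport_sv = rng.normal(0.0, 0.01, n_cells)    # Sverdrups, per grid cell (assumed)

edges = np.arange(25.0, 30.0, 0.5)               # density class boundaries
idx = np.digitize(sigma_theta, edges)
formation = np.array([transport_sv[idx == i].sum() for i in range(1, len(edges))])

for lo, hi, f in zip(edges[:-1], edges[1:], formation):
    print(f"{lo:.1f} < sigma_theta < {hi:.1f} kg/m3 : {f:+.3f} Sv")
```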
Abstract:
A data set from a commercial Nellore beef cattle selection program was used to compare breeding models that did or did not assume marker effects to estimate breeding values when a reduced number of animals have phenotypic, genotypic and pedigree information available. The herd's complete data set comprised 83,404 animals measured for weaning weight (WW), post-weaning gain (PWG), scrotal circumference (SC) and muscle score (MS), corresponding to 116,652 animals in the relationship matrix. Single-trait analyses were performed with the MTDFREML software to estimate fixed and random effect solutions using this complete data set. The estimated additive effects were taken as the reference breeding values for those animals. The individual observed phenotype for each trait was adjusted for the fixed and random effect solutions, except for direct additive effects. The adjusted phenotype, composed of the additive and residual parts of the observed phenotype, was used as the dependent variable for model comparison. Among all measured animals of this herd, only 3160 animals were genotyped for 106 SNP markers. Three models were compared in terms of changes in animal ranking, global fit and predictive ability. Model 1 included only polygenic effects, model 2 included only marker effects and model 3 included both polygenic and marker effects. Bayesian inference via Markov chain Monte Carlo methods, performed with the TM software, was used to analyze the data for model comparison. Two different priors were adopted for marker effects in models 2 and 3: the first was a uniform distribution (U) and the second assumed that marker effects were normally distributed (N). Higher rank correlation coefficients were observed for models 3_U and 3_N, indicating greater similarity between these models' rankings and the ranking based on the reference breeding values. Model 3_N presented a better global fit, as demonstrated by its low DIC. The best models in terms of predictive ability were models 1 and 3_N. Differences due to the prior assumed for marker effects in models 2 and 3 can be attributed to the better ability of the normal prior to handle collinear effects. Models 2_U and 2_N presented the worst performance, indicating that this small set of markers should not be used to genetically evaluate animals with no data, since its predictive ability is limited. In conclusion, model 3_N presented a slight superiority when a reduced number of animals have phenotypic, genotypic and pedigree information. This can be attributed to the variation captured by the polygenic and marker effects fitted together and to the normal prior assumed for marker effects, which deals better with the collinearity between markers.
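For intuition on why the normal prior handles collinear markers better: the posterior mean of marker effects under an i.i.d. normal prior coincides with a ridge (SNP-BLUP) estimate. The sketch below illustrates this on simulated genotypes; the dimensions, shrinkage parameter lambda and data are assumptions, and this is not the authors' MTDFREML/TM analysis.

```python
# Minimal sketch: marker effects under a normal prior = ridge / SNP-BLUP.
import numpy as np
from scipy.stats import spearmanr

rng = np.random.default_rng(2)
n_animals, n_snp = 300, 106
Z = rng.integers(0, 3, size=(n_animals, n_snp)).astype(float)  # SNP genotypes 0/1/2
true_effects = rng.normal(0, 0.1, n_snp)
y = Z @ true_effects + rng.normal(0, 1.0, n_animals)           # adjusted phenotype

lam = 10.0                                  # assumed ratio of residual to marker variance
# Posterior mean of marker effects under an i.i.d. normal prior:
g_hat = np.linalg.solve(Z.T @ Z + lam * np.eye(n_snp), Z.T @ y)
gebv = Z @ g_hat                            # genomic breeding values

# Rank agreement with the "reference" breeding values, as in the paper:
print("rank correlation:", spearmanr(gebv, Z @ true_effects).correlation)
```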
Abstract:
The log-Burr XII regression model for grouped survival data is evaluated in the presence of many ties. The methodology for grouped survival data is based on life tables, where the times are grouped into k intervals, and discrete lifetime regression models are fitted to the data. The model parameters are estimated by maximum likelihood and jackknife methods. To detect influential observations in the proposed model, diagnostic measures based on case deletion, so-called global influence, and influence measures based on small perturbations in the data or in the model, referred to as local influence, are used. In addition to these measures, total local influence and influential estimates are also considered. Monte Carlo simulation studies are conducted to assess the finite-sample behavior of the maximum likelihood estimators of the proposed model for grouped survival data. A real data set is analyzed using a regression model for grouped data.
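The grouped-data likelihood this methodology rests on can be sketched compactly. For brevity the snippet below substitutes an exponential lifetime for the log-Burr XII while keeping the life-table structure: a failure in interval (a, b] contributes S(a) - S(b) to the likelihood, and a censoring at a contributes S(a). Interval boundaries and counts are assumed.

```python
# Minimal sketch: maximum likelihood for interval-grouped survival data,
# with an exponential lifetime standing in for the log-Burr XII.
import numpy as np
from scipy.optimize import minimize

edges = np.array([0.0, 1.0, 2.0, 4.0, 8.0])  # life-table interval boundaries (assumed)
fail = np.array([20, 15, 10, 5])             # failures per interval (assumed counts)
cens = np.array([2, 3, 4, 6])                # censored at interval start (assumed)

def neg_log_lik(log_rate):
    rate = np.exp(log_rate[0])
    S = np.exp(-rate * edges)                       # survival at boundaries
    ll = np.sum(fail * np.log(S[:-1] - S[1:]))      # failures within intervals
    ll += np.sum(cens * (-rate * edges[:-1]))       # right-censored contributions
    return -ll

res = minimize(neg_log_lik, x0=[0.0])
print("MLE failure rate:", np.exp(res.x[0]))
```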
Abstract:
Atmospheric conditions at the site of a cosmic ray observatory must be known for reconstructing observed extensive air showers. The Global Data Assimilation System (GDAS) is a global atmospheric model based on meteorological measurements and numerical weather predictions. GDAS provides altitude-dependent profiles of the main state variables of the atmosphere, such as temperature, pressure, and humidity. The original data and their application to the air shower reconstruction of the Pierre Auger Observatory are described. The utility of the GDAS data is shown by comparisons with radiosonde and weather station measurements obtained on-site in Malargüe and with averaged monthly models.
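For a sense of how such profiles enter a reconstruction, the sketch below converts a few assumed pressure/temperature levels (standing in for real GDAS data) into air density and the vertical atmospheric depth X(h), using the ideal gas law and the hydrostatic relation X(h) ≈ p(h)/g.

```python
# Minimal sketch: altitude-dependent p/T levels -> air density and
# vertical atmospheric depth, the quantities used in shower reconstruction.
import numpy as np

R = 287.05                                  # specific gas constant of dry air, J/(kg K)
g = 9.81                                    # gravitational acceleration, m/s2
h = np.array([0.0, 2e3, 5e3, 10e3, 20e3])   # altitude levels (m), assumed
p = np.array([1.013e5, 7.95e4, 5.40e4, 2.65e4, 5.5e3])   # pressure (Pa), assumed
T = np.array([293.0, 280.0, 260.0, 223.0, 217.0])         # temperature (K), assumed

rho = p / (R * T)              # ideal-gas air density (kg/m3)
X = p / g * 0.1                # hydrostatic depth X(h) ~ p(h)/g, in g/cm2

for h_i, r, X_i in zip(h, rho, X):
    print(f"h = {h_i/1e3:5.1f} km   rho = {r:.3f} kg/m3   X = {X_i:7.1f} g/cm^2")
```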
Abstract:
Scientists predict that global agricultural lands will expand over the next few decades due to increasing demands for food production and an exponential increase in crop-based biofuel production. These changes in land use will greatly impact biogeochemical and biogeophysical cycles across the globe. It is therefore important to develop models that can accurately simulate the interactions between the atmosphere and important crops. In this study, we develop and validate a new process-based sugarcane model (included as a module within the Agro-IBIS dynamic agro-ecosystem model) that can be applied at multiple spatial scales. At the site level, the model systematically underestimated the daily sensible heat flux (H, by -10.5%) and overestimated the daily latent heat flux (E, by 14.8%) when compared against micrometeorological observations from southeast Brazil. The model underestimated ET (relative bias between -10.1% and 12.5%) when compared against an agro-meteorological field experiment from northeast Australia. At the regional level, the model accurately simulated average yield for the four largest mesoregions (clusters of municipalities) in the state of Sao Paulo, Brazil, over a period of 16 years, with a yield relative bias of -0.68% to 1.08%. Finally, the simulated annual average sugarcane yield over 31 years for the state of Louisiana (US) had a low relative bias (-2.67%), but exhibited lower interannual variability than the observed yields.
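The validation statistics quoted above are simple to compute; a minimal sketch with illustrative yield series (not Agro-IBIS output) follows.

```python
# Minimal sketch: relative bias and interannual variability of
# simulated vs observed yields. Series below are assumed, for illustration.
import numpy as np

obs = np.array([78.0, 82.5, 75.0, 80.1, 85.3, 79.9])   # observed yield (t/ha)
sim = np.array([77.2, 81.0, 76.5, 79.0, 83.8, 80.4])   # simulated yield (t/ha)

rel_bias = (sim.mean() - obs.mean()) / obs.mean() * 100
print(f"relative bias: {rel_bias:+.2f}%")
print(f"interannual std (obs): {obs.std(ddof=1):.2f}")
print(f"interannual std (sim): {sim.std(ddof=1):.2f}")   # lower -> damped variability
```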
Abstract:
We analyze the global phase diagram of a Maier-Saupe lattice model with the inclusion of shape-disordered degrees of freedom to mimic a mixture of oblate and prolate molecules (discs and cylinders). In the neighborhood of a Landau multicritical point, solutions of the statistical problem can be written as a Landau-de Gennes expansion for the free energy. If the shape-disordered degrees of freedom are quenched, we confirm the existence of a biaxial nematic structure. If orientational and disorder degrees of freedom are allowed to thermalize, this biaxial solution becomes thermodynamically unstable. Also, we use a two-temperature formalism to mimic the presence of two distinct relaxation times, and show that a slight departure from complete thermalization is enough to stabilize a biaxial nematic phase.
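As background for the lattice model, the uniaxial Maier-Saupe mean-field self-consistency for a single molecular shape can be iterated numerically as below; this baseline omits the shape disorder and two-temperature formalism of the paper and simply shows the isotropic-nematic jump in the order parameter S. Coupling values are illustrative.

```python
# Minimal sketch: fixed-point iteration of the Maier-Saupe self-consistency
# S = <P2(cos theta)> with mean-field weight exp(beta*J*S*P2).
import numpy as np

def p2(x):
    """Second Legendre polynomial P2(cos theta)."""
    return 0.5 * (3.0 * x**2 - 1.0)

def order_parameter(beta_J, n_iter=300):
    x = np.linspace(-1.0, 1.0, 2001)         # uniform grid in cos(theta)
    S = 0.5                                  # initial guess
    for _ in range(n_iter):
        w = np.exp(beta_J * S * p2(x))       # mean-field Boltzmann weight
        S = (p2(x) * w).sum() / w.sum()      # self-consistent update (dx cancels)
    return S

for bj in [3.0, 4.0, 5.0, 6.0]:
    print(f"beta*J = {bj:.1f}  ->  S = {order_parameter(bj):+.3f}")
```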
Abstract:
The objective of this article is to discuss, in an interdisciplinary way, the necessary conditions and the paths toward realizing an economic and political model of greater autonomy on the world stage for the countries of South America. The approach starts from the political and social conditions shaped by the region at the end of the first decade of this century, emphasizing the role of Brazil and considering the economic and political relations of the international context, political stability, and the growing protagonism of China in the countries of the region. It also analyzes aspects concerning the importance of social policy and of a common defense policy under the Union of South American Nations (Unasul).
Abstract:
This study deals with the reduction of stiffness in precast concrete structural elements of multi-storey buildings for the analysis of global stability. Having reviewed the technical literature, this paper presents indications of stiffness reduction in different codes, standards, and recommendations and compares these to the values found in the present study. The structural model analyzed in this study was constructed with finite elements using the ANSYS® software. Physical non-linearity (PNL) was considered through the M x N x 1/r diagrams, and geometric non-linearity (GNL) was treated with the Newton-Raphson method. Using a typical precast concrete structure with multiple floors and semi-rigid beam-to-column connections, expressions for a stiffness reduction coefficient are presented. The main conclusions of the study are as follows: the reduction coefficients obtained from the M x N x 1/r diagrams differ from standards that use a simplified consideration of PNL; the stiffness reduction coefficient for columns in the arrangements analyzed was approximately 0.5 to 0.6; and the stiffness reduction coefficient for concrete beams subjected to the effects of creep, with creep coefficients from 0 to 3, ranged from 0.45 to 0.2 for positive bending moments and from 0.3 to 0.2 for negative bending moments.
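A stiffness reduction coefficient of this kind can be read off a moment-curvature diagram as the ratio of the secant stiffness at the design moment to the initial (uncracked) stiffness. The sketch below does this for an assumed M x 1/r curve and design moment; all values are illustrative, not from the paper's ANSYS model.

```python
# Minimal sketch: stiffness reduction coefficient = EI_secant / EI_initial
# from an assumed moment-curvature (M x 1/r) diagram.
import numpy as np

curvature = np.array([0.0, 1e-3, 2e-3, 4e-3, 8e-3])   # 1/r (1/m), assumed
moment = np.array([0.0, 120.0, 190.0, 240.0, 265.0])  # M (kN m), assumed

EI_initial = moment[1] / curvature[1]                  # initial (uncracked) stiffness
M_design = 230.0                                       # assumed design moment (kN m)
k_design = np.interp(M_design, moment, curvature)      # curvature at design moment
EI_secant = M_design / k_design

print("stiffness reduction coefficient:", EI_secant / EI_initial)
```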
Abstract:
Cutting tools with higher wear resistance are those manufactured by the powder metallurgy process, which combines the development of materials and design properties, shape-making technology and sintering. The annual global market for cutting tools consumes about US$ 12 billion; therefore, any research that improves tool designs and machining process techniques adds value or reduces costs. The aim is to describe the Spark Plasma Sintering (SPS) of cutting tools in functionally graded materials, to show the suitability of this structural design through a thermal residual stress model and, lastly, to present two kinds of inserts. For this, three cutting tool materials were used (Al2O3-ZrO2, Al2O3-TiC and WC-Co). The samples were sintered by SPS at 1300 °C and 70 MPa. The results showed that mechanical and thermal displacements may be separated during thermal treatment for analysis. In addition, the absence of cracks indicated agreement between the experimental results and the predicted residual stresses.
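For orientation on the residual-stress side, a first-order biaxial thermal mismatch estimate for a two-layer compact cooled from the sintering temperature can be written in a few lines. The elastic constants and CTEs below are textbook-order assumptions, not the paper's model inputs.

```python
# Minimal sketch: biaxial thermal mismatch stress in an equal-thickness
# bilayer cooled from sintering temperature. All constants are assumed.
E1, nu1, a1 = 380e9, 0.22, 8.0e-6     # Al2O3-like layer: E (Pa), Poisson, CTE (1/K)
E2, nu2, a2 = 600e9, 0.22, 5.5e-6     # WC-Co-like layer
dT = -(1300 - 25)                     # cooling from sintering temperature (K)

eps1_free, eps2_free = a1 * dT, a2 * dT            # unconstrained thermal strains
E1b, E2b = E1 / (1 - nu1), E2 / (1 - nu2)          # biaxial moduli
# Force balance for equal thicknesses gives the common in-plane strain:
eps_c = (E1b * eps1_free + E2b * eps2_free) / (E1b + E2b)
sigma1 = E1b * (eps_c - eps1_free)                 # tensile in the higher-CTE layer
print(f"layer-1 residual stress ~ {sigma1/1e6:+.0f} MPa")
```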
Abstract:
Diapycnal diffusivity in the ocean is one of the least well-known parameters in current climate models. Its importance lies in the fact that it is one of the main factors in the transport of heat toward deeper layers of the ocean. Measurements of this diffusivity are variable and insufficient for compiling a global map of its values. A comprehensive literature review through 2009 found that the climate system is extremely sensitive to diapycnal diffusivity: the South Pacific scales with the 0.63 power of the diapycnal diffusivity coefficient kv, in contrast to the 0.44 power found for the North Atlantic, so the South Pacific is the more sensitive basin. This evidences the need to clarify mixing schemes, closure schemes, and their parameterizations through Global Circulation Models (GCMs) and Earth System Models of Intermediate Complexity (EMICs), within the context of possible climate change and global warming due to increased greenhouse gas emissions. Thus, the main objective of this work is to understand the sensitivity of the climate system to diapycnal diffusivity in the ocean through GCMs and EMICs. This requires the analysis of possible diapycnal mixing schemes, with the ultimate goal of finding the optimal model to predict the evolution of the climate system, the study of all variables that influence it, and correct simulation over long periods of time.
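The quoted power laws translate directly into sensitivity factors. For example, under the assumed scaling response ∝ kv**p:

```python
# Worked example: doubling the diapycnal diffusivity kv changes the response
# more in the South Pacific (p = 0.63) than in the North Atlantic (p = 0.44).
for basin, p in [("South Pacific", 0.63), ("North Atlantic", 0.44)]:
    factor = 2.0 ** p
    print(f"{basin}: doubling kv scales the response by {factor:.2f}x")
```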
Abstract:
Master's in Oceanography
Abstract:
Every seismic event produces seismic waves which travel throughout the Earth. Seismology is the science of interpreting measurements to derive information about the structure of the Earth. Seismic tomography is the most powerful tool for determining the 3D structure of the deep Earth's interior. Tomographic models obtained at the global and regional scales are an underlying tool for determining the geodynamical state of the Earth, showing evident correlation with other geophysical and geological characteristics. The global tomographic images of the Earth can be written as linear combinations of basis functions from a specifically chosen set, defining the model parameterization. A number of different parameterizations are commonly seen in the literature: seismic velocities in the Earth have been expressed, for example, as combinations of spherical harmonics or by means of the simpler characteristic functions of discrete cells. With this work we focus our attention on this aspect, evaluating a new type of parameterization based on wavelet functions. It is known from classical Fourier theory that a signal can be expressed as the sum of a, possibly infinite, series of sines and cosines, often referred to as a Fourier expansion. The big disadvantage of a Fourier expansion is that it has only frequency resolution and no time resolution. Wavelet analysis (or the wavelet transform) is probably the most recent solution to overcome the shortcomings of Fourier analysis. The fundamental idea behind this innovative analysis is to study the signal according to scale. Wavelets, in fact, are mathematical functions that cut up data into different frequency components, and then study each component with a resolution matched to its scale, so they are especially useful in the analysis of non-stationary processes that contain multi-scale features, discontinuities and sharp spikes. Wavelets are essentially used in two ways when applied to the study of geophysical processes or signals: 1) as a basis for the representation or characterization of a process; 2) as an integration kernel for analysis to extract information about the process. These two types of applications of wavelets in the geophysical field are the object of study of this work. First we use wavelets as a basis to represent and solve the tomographic inverse problem. After a brief introduction to seismic tomography theory, we assess the power of wavelet analysis in the representation of two different types of synthetic models; we then apply it to real data, obtaining surface wave phase velocity maps and evaluating its abilities by comparison with another type of parameterization (i.e., block parameterization). For the second type of wavelet application, we analyze the ability of the continuous wavelet transform in spectral analysis, starting again with synthetic tests to evaluate its sensitivity and capability and then applying the same analysis to real data to obtain local correlation maps between different models at the same depth or between different profiles of the same model.
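The "wavelets as a basis" idea can be sketched with the PyWavelets package (assumed available): decompose a synthetic 2D phase-velocity map, keep only the largest coefficients, and reconstruct. The map, wavelet choice (Haar) and threshold are illustrative assumptions, not the work's actual parameterization.

```python
# Minimal sketch: sparse multi-scale representation of a 2D velocity map
# via a wavelet decomposition (PyWavelets).
import numpy as np
import pywt

rng = np.random.default_rng(3)
v = 4.0 + 0.1 * rng.standard_normal((64, 64))    # synthetic velocity map (km/s)
v[20:40, 20:40] += 0.3                           # a sharp local anomaly

coeffs = pywt.wavedec2(v, "haar", level=3)       # multi-scale decomposition
arr, slices = pywt.coeffs_to_array(coeffs)

thresh = np.quantile(np.abs(arr), 0.95)          # keep the top 5% of coefficients
arr_sparse = np.where(np.abs(arr) >= thresh, arr, 0.0)
coeffs_sparse = pywt.array_to_coeffs(arr_sparse, slices, output_format="wavedec2")
v_rec = pywt.waverec2(coeffs_sparse, "haar")

print("reconstruction RMS error:", np.sqrt(np.mean((v - v_rec) ** 2)))
```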
Abstract:
This doctoral work gains deeper insight into the dynamics of knowledge flows within and across clusters, unfolding their features, directions and strategic implications. Alliances, networks and personnel mobility are acknowledged as the three main channels of inter-firm knowledge flows, thus offering three heterogeneous measures with which to analyze the phenomenon. The interplay between the three channels and the richness of available research methods allowed for the elaboration of three different papers and perspectives. The common empirical setting is the IT cluster in Bangalore, chosen for its distinguished features as a high-tech cluster and for its steady double-digit yearly growth around the service-based business model. The first paper deploys both a firm-level and a tie-level analysis, exploring the cases of 4 domestic companies and 2 MNCs active in the cluster, according to a cluster-based perspective. The distinction between business-domain knowledge and technical knowledge emerges from the qualitative evidence and is further confirmed by quantitative analyses at the tie level. At the firm level, the degree of specialization seems to influence the kind of knowledge shared, while at the tie level both the frequency of interaction and the governance mode prove to determine differences in the distribution of knowledge flows. The second paper zooms out and considers inter-firm networks; focusing particularly on the role of the cluster boundary, internal and external networks are analyzed in their size, long-term orientation and degree of exploration. The research method is purely qualitative and allows for the observation of the evolving strategic role of the internal network: from exploitation-based to exploration-based. Moreover, a causal pattern is emphasized, linking the evolution and features of the external network to the evolution and features of the internal network. The final paper addresses the softer and more micro-level side of knowledge flows: personnel mobility. A social capital perspective is developed here, which considers both the acquisition and the loss of employees as building inter-firm ties, thus enhancing the company's overall social capital. Negative binomial regression analyses at the dyad level test the significant impact of cluster affiliation (cluster firms vs non-cluster firms), industry affiliation (IT firms vs non-IT firms) and foreign affiliation (MNCs vs domestic firms) in shaping the uneven distribution of personnel mobility, and thus of knowledge flows, among companies.
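A minimal sketch of the kind of dyad-level negative binomial regression described above, on simulated data with statsmodels; the covariate names, effect sizes and dispersion are assumptions for illustration, not the thesis data.

```python
# Minimal sketch: counts of personnel moves between firm dyads regressed on
# cluster, industry and foreign-affiliation indicators (negative binomial GLM).
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(4)
n = 1000
cluster = rng.integers(0, 2, n)     # both firms in the cluster?
it_ind = rng.integers(0, 2, n)      # both firms in IT?
foreign = rng.integers(0, 2, n)     # MNC involved?

mu = np.exp(-1.0 + 0.8 * cluster + 0.5 * it_ind - 0.3 * foreign)
k = 2.0                                             # NB shape (overdispersion), assumed
y = rng.negative_binomial(k, k / (k + mu))          # overdispersed counts with mean mu

X = sm.add_constant(np.column_stack([cluster, it_ind, foreign]))
model = sm.GLM(y, X, family=sm.families.NegativeBinomial(alpha=0.5))
print(model.fit().summary())
```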
Abstract:
The intensity of regional specialization in specific activities, and conversely, the level of industrial concentration in specific locations, has been used as complementary evidence for the existence and significance of externalities. Additionally, economists have mainly focused the debate on disentangling the sources of specialization and concentration processes according to three vectors: natural advantages, internal scale economies, and external scale economies. The arbitrariness of spatial partitions plays a key role in capturing these effects, and the selected partition should reflect the actual characteristics of the economy. Thus, the identification of spatial boundaries for measuring specialization becomes critical, since the model will most likely have to adapt to different scales of distance and be influenced by different types of externalities or agglomeration economies, which are based on interaction mechanisms with particular requirements of spatial proximity. This work analyzes the spatial aspect of economic specialization, supported by the case of the manufacturing industry. The main objective is to propose, for discrete and continuous space: i) a measure of global specialization; ii) a local disaggregation of the global measure; and iii) a spatial clustering method for the identification of specialized agglomerations.
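One common way to operationalize a global specialization measure with a local disaggregation (not necessarily the authors' proposal) is the Krugman dissimilarity index built from location quotients. A minimal sketch on an assumed region-by-industry employment matrix follows.

```python
# Minimal sketch: Krugman specialization index per region and the
# location quotients it disaggregates into.
import numpy as np

# rows = regions, columns = industries (assumed employment counts)
E = np.array([[200.0,  50.0,  30.0],
              [ 60.0, 180.0,  40.0],
              [ 30.0,  40.0, 150.0]])

region_share = E / E.sum(axis=1, keepdims=True)   # industry mix within each region
national_share = E.sum(axis=0) / E.sum()          # national industry mix

krugman = np.abs(region_share - national_share).sum(axis=1)  # per-region index
lq = region_share / national_share                           # location quotients

print("Krugman specialization index by region:", krugman.round(3))
print("location quotients:\n", lq.round(2))
```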