952 results for Linear models (Statistics)
Abstract:
This project aims to identify which concepts of health, disease, epidemiology, and risk are applicable to companies in Colombia's oil and natural gas extraction sector. Traditional financial analysis offers low predictive power and is insufficient for long-term investment and decision making, and it disregards variables such as risk and future expectations; hence the need to draw on different perspectives and integrative models. This concern is pertinent to the oil and natural gas extraction sector because of the growing foreign investment it has reported, US$2,862 million in 2010, more than ten times its 2003 value. Multidimensional models could therefore be developed on the basis of financial health, epidemiological, and statistical concepts. The term "health", adopted in the business context, proves useful and conceptually coherent, as it highlights the presence of different interacting and interconnected subsystems or factors. It should also be noted that a multidimensional (multi-stage) model must take risk into account, and epidemiological analysis has proved useful for quantifying risk and integrating it into the system alongside related concepts such as the hazard ratio and relative risk. This will be examined through a theoretical-conceptual study that complements a previous one, contributing to the corporate finance project within the Management research line.
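Since the abstract leans on the epidemiological notions of hazard ratio and relative risk, a minimal sketch may help fix ideas; the 2x2-table translation to firms and every number below are hypothetical illustrations, not figures from the project.

```python
# Minimal sketch: relative risk (risk ratio) for firm "ill health",
# transplanting the epidemiological 2x2 table to companies.
# All figures below are hypothetical illustrations, not data from the study.

def relative_risk(exposed_cases, exposed_total, unexposed_cases, unexposed_total):
    """Risk ratio: incidence among exposed / incidence among unexposed."""
    risk_exposed = exposed_cases / exposed_total
    risk_unexposed = unexposed_cases / unexposed_total
    return risk_exposed / risk_unexposed

# e.g., "exposure" = high leverage; "case" = financial distress within 5 years
rr = relative_risk(exposed_cases=12, exposed_total=40,
                   unexposed_cases=6, unexposed_total=60)
print(f"Relative risk of distress for highly leveraged firms: {rr:.2f}")
```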
Abstract:
This research project starts from the dynamics of the outsourced distribution model of a Colombian mass-consumption dairy company, referred to in this study as "Lactosa". Using panel data within a case study, two demand models are built by product category and distributor, and stochastic simulation is used to identify the relevant variables affecting their cost structures. The problem is modeled from the income statement of each of the four distributors analyzed in the country's central region. The cost structure and sales behavior are analyzed given a logistics distribution margin (%), as a function of the relevant independent variables relating to the business, the market, and the macroeconomic environment described in the object of study. Among other findings, notable gaps stand out in distribution costs and sales-force costs, despite the homogeneity of segments. The study identifies value drivers and the costs with the greatest individual dispersion, and suggests strategic alliances among some groups of distributors. The panel-data modeling identifies the relevant management variables that affect sales volume by category and distributor, focusing management's efforts. It is recommended to narrow the gaps and for the producer to promote strategies aimed at standardizing distributors' internal processes, and to promote and replicate the analytical models without attempting to replace expert knowledge. Scenario building jointly and reliably strengthens the competitive position of the company and its distributors.
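As a rough illustration of the panel-data estimation the abstract describes, the sketch below fits a distributor fixed-effects demand model with the linearmodels package; the file name, index columns, and regressors (log_price, promo, gdp_growth) are hypothetical placeholders, not the study's variables.

```python
# Hedged sketch of a fixed-effects panel demand model in the spirit of the
# abstract; distributor/month index and regressor names are hypothetical.
import pandas as pd
from linearmodels.panel import PanelOLS

df = pd.read_csv("sales_panel.csv")           # hypothetical input file
df = df.set_index(["distributor", "month"])   # entity, time MultiIndex

# Sales volume explained by business, market and macroeconomic covariates,
# with distributor fixed effects absorbing time-invariant heterogeneity.
model = PanelOLS.from_formula(
    "log_volume ~ 1 + log_price + promo + gdp_growth + EntityEffects", data=df
)
print(model.fit(cov_type="clustered", cluster_entity=True))
```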
Abstract:
The financial results of organizations are the object of ongoing study and analysis; predicting their behavior is a permanent task for entrepreneurs, investors, analysts, and academics. This paper explores the impact of asset size (total asset value) on operating and net income, first analyzing the relationship between these variables using traditional financial indicators, such as operating and net profitability, and descriptive statistics that allow the data to be classified as linear or nonlinear. Having found that the financial results of the companies supervised by the Superintendencia de Sociedades for 2012 behave nonlinearly, the relationship between assets and results is then analyzed using phase spaces and recurrence analysis, tools suited to chaotic and complex systems. The research on the relationship between assets and financial results draws on the 2012 year-end financial reports of the Superintendencia de Sociedades (Superintendencia de Sociedades, 2012).
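The phase-space and recurrence tools the abstract invokes can be sketched in a few lines; the embedding delay, dimension, and recurrence threshold below are illustrative choices, and the random-walk series stands in for the assets/results data.

```python
# Minimal sketch: time-delay (phase-space) embedding and a recurrence matrix.
import numpy as np

def embed(x, dim=3, tau=1):
    """Time-delay embedding of a 1-D series into a dim-dimensional phase space."""
    n = len(x) - (dim - 1) * tau
    return np.column_stack([x[i * tau : i * tau + n] for i in range(dim)])

def recurrence_matrix(x, dim=3, tau=1, eps=0.1):
    """Binary recurrence matrix: 1 where embedded states are closer than eps."""
    X = embed(np.asarray(x, dtype=float), dim, tau)
    d = np.linalg.norm(X[:, None, :] - X[None, :, :], axis=-1)
    return (d < eps).astype(int)

rng = np.random.default_rng(0)
series = rng.standard_normal(500).cumsum()     # stand-in for the financial data
R = recurrence_matrix(series, eps=0.5)
print("recurrence rate:", R.mean())
```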
Abstract:
We present the sensitivity analysis of a model of brand perception and marketing-investment adjustment developed at the Laboratorio de Simulación of the Universidad del Rosario. This thesis begins with an introduction to sensitivity analysis and its complement, uncertainty analysis. Both analyses are then demonstrated on a simple application of the model, by exhaustively and rigorously applying the steps described in the first part. The problem of measuring magnitudes, which proves to be the most complex aspect of applying the model in practice, is then discussed, and conclusions are drawn from the results of the analyses.
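As a hedged sketch of the two companion analyses, the snippet below propagates input uncertainty through a toy response function and ranks inputs by standardized regression coefficients, one simple sensitivity measure; it is not the Laboratorio de Simulación model itself, and all names and numbers are illustrative.

```python
# Uncertainty analysis (Monte Carlo propagation) and sensitivity analysis
# (standardized regression coefficients) on a hypothetical toy model.
import numpy as np

def model(ad_spend, word_of_mouth, churn):    # hypothetical response function
    return 2.0 * ad_spend + 1.0 * word_of_mouth - 3.0 * churn

rng = np.random.default_rng(1)
n = 10_000
X = np.column_stack([
    rng.normal(1.0, 0.2, n),    # ad_spend
    rng.normal(0.5, 0.1, n),    # word_of_mouth
    rng.normal(0.1, 0.05, n),   # churn
])
y = model(*X.T)

# Uncertainty analysis: distribution of the output
print("mean=%.3f  sd=%.3f" % (y.mean(), y.std()))

# Sensitivity analysis: standardized regression coefficients
beta, *_ = np.linalg.lstsq(np.column_stack([np.ones(n), X]), y, rcond=None)
src = beta[1:] * X.std(axis=0) / y.std()
print("SRCs (ad_spend, word_of_mouth, churn):", np.round(src, 3))
```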
Abstract:
This work focuses on the analysis of the stock of Ecopetrol, a company representative of the oil and natural gas extraction market in Colombia (SP&G), over the period from 22 May 2012 to 30 August 2013. During this period the share price underwent a series of variations related to the new share issue carried out by the company. This change in the asset's behavior raised a series of questions about (i) the market's reaction to different events occurring within firms and in their environment, and (ii) the ability of financial models to predict and understand the observed reactions of assets (understood as debt). The work first considers its relevance in line with the objectives and research of the Escuela de Administración of the Universidad del Rosario, specifically on topics of perdurability within the Management research line, where understanding debt, both as part of current operations and as a determinant of organizations' future behavior, is of particular importance. Having clarified the relationship between this work and the University, various financial concepts and theories are developed that allow the market to be studied more specifically, with the aim of reducing the risk of the investments made. This analysis proceeds in two parts: (i) discrete-time models and (ii) continuous-time models. With the models reviewed, the data are then analyzed using chaos models and recurrence analysis, which show that the shares behave chaotically while establishing certain relationships between current and historical prices, producing defined patterns among prices, quantities, the macroeconomic environment, and the organization. The Colombian oil market is also described, with Ecopetrol studied as a company and as the main axis of that market in the country. Ecopetrol is representative because it is one of the country's largest fiscal contributors, since its revenue derives from subsoil resources, so that the oil rent includes taxes on production, transformation, and consumption (Ecopetrol, 2003). Finally, the results of the work are presented, together with the analysis that gives rise to a number of recommendations based on what was observed.
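Among the continuous-time models such a review typically covers, geometric Brownian motion is the canonical example; the sketch below simulates it under illustrative parameters (not Ecopetrol estimates from the study).

```python
# Sketch of a canonical continuous-time asset model: geometric Brownian motion.
# Drift, volatility and the initial price are illustrative assumptions.
import numpy as np

def gbm_paths(s0, mu, sigma, T, steps, n_paths, seed=0):
    """Exact GBM increments: S_{t+dt} = S_t * exp((mu - sigma^2/2) dt + sigma dW)."""
    rng = np.random.default_rng(seed)
    dt = T / steps
    z = rng.standard_normal((n_paths, steps))
    log_increments = (mu - 0.5 * sigma**2) * dt + sigma * np.sqrt(dt) * z
    return s0 * np.exp(np.cumsum(log_increments, axis=1))

paths = gbm_paths(s0=100.0, mu=0.05, sigma=0.3, T=1.0, steps=252, n_paths=1000)
print("mean terminal price:", paths[:, -1].mean())
```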
Abstract:
This paper analyzes the measure of systemic importance ∆CoVaR proposed by Adrian and Brunnermeier (2009, 2010) within the context of a similar class of risk measures used in the risk management literature. In addition, we develop a series of testing procedures, based on ∆CoVaR, to identify and rank the systemically important institutions. We stress the importance of statistical testing in interpreting the measure of systemic importance. An empirical application illustrates the testing procedures, using equity data for three European banks.
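A common estimation route for ∆CoVaR, consistent with Adrian and Brunnermeier's quantile-regression approach, can be sketched as follows; the returns are simulated rather than the equity data of the three European banks.

```python
# Hedged sketch of ∆CoVaR estimation via quantile regression on simulated data.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(2)
n = 2_000
bank = rng.standard_normal(n)
system = 0.4 * bank + rng.standard_normal(n)          # system co-moves with bank
df = pd.DataFrame({"bank": bank, "system": system})

q = 0.05
beta_q = smf.quantreg("system ~ bank", df).fit(q=q).params["bank"]

# ∆CoVaR_q = beta_q * (VaR_q(bank) - VaR_0.5(bank)), VaR taken as a return quantile
delta_covar = beta_q * (df["bank"].quantile(q) - df["bank"].quantile(0.5))
print(f"estimated ∆CoVaR at q={q}: {delta_covar:.3f}")
```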
Abstract:
We examine the long-run relationship between the parallel and the official exchange rate in Colombia over two regimes: a crawling peg period and a more flexible crawling band one. The short-run adjustment process of the parallel rate is examined in both a linear and a nonlinear context. We find that the change from the crawling peg to the crawling band regime did not affect the long-run relationship between the official and parallel exchange rates, but altered the short-run dynamics. Nonlinear adjustment seems appropriate for the first period, mainly due to strict foreign exchange controls that cause distortions in the transition back to equilibrium once disequilibrium occurs.
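The two-step logic, a long-run cointegrating relationship plus regime-dependent short-run adjustment, can be sketched as below; the threshold split on the sign of the disequilibrium is one simple nonlinear specification, and the series are simulated stand-ins for the official and parallel rates.

```python
# Sketch: Engle-Granger cointegration test, then a threshold error-correction
# step allowing different adjustment speeds above/below equilibrium.
import numpy as np
import statsmodels.api as sm
from statsmodels.tsa.stattools import coint

rng = np.random.default_rng(3)
official = rng.standard_normal(500).cumsum()
parallel = official + rng.standard_normal(500) * 0.5   # shares the trend

tstat, pvalue, _ = coint(parallel, official)
print("Engle-Granger cointegration p-value:", round(pvalue, 4))

# Long-run relation and its disequilibrium
X = sm.add_constant(official)
resid = parallel - sm.OLS(parallel, X).fit().predict(X)

# Threshold error correction: adjustment speed differs by sign of disequilibrium
d_parallel = np.diff(parallel)
z = resid[:-1]
regressors = sm.add_constant(np.column_stack([z * (z > 0), z * (z <= 0)]))
print(sm.OLS(d_parallel, regressors).fit().params)
```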
Abstract:
Detailed knowledge of waterfowl abundance and distribution across Canada is lacking, which limits our ability to effectively conserve and manage their populations. We used 15 years of data from an aerial transect survey to model the abundance of 17 species or species groups of ducks within southern and boreal Canada. We included 78 climatic, hydrological, and landscape variables in Boosted Regression Tree models, allowing flexible response curves and multiway interactions among variables. We assessed predictive performance of the models using four metrics and calculated uncertainty as the coefficient of variation of predictions across 20 replicate models. Maps of predicted relative abundance were generated from the resulting models, and they largely match spatial patterns evident in the transect data. We observed two main distribution patterns: a concentrated prairie-parkland distribution and a more dispersed pan-Canadian distribution. These patterns were congruent with the relative importance of predictor variables and model evaluation statistics between the two groups of distributions. Most species had a hydrological variable as the most important predictor, although the specific hydrological variable differed somewhat among species. In some cases, important variables had clear ecological interpretations, but in other instances, e.g., topographic roughness, they may simply reflect chance correlations between species distributions and environmental variables identified by the model-building process. Given the performance of our models, we suggest that the resulting prediction maps can be used in future research and to guide conservation activities, particularly within the bounds of the survey area.
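A compact sketch of the workflow, under stated simplifications: boosted regression trees fit to synthetic abundance data with two covariates standing in for the 78 survey variables, and uncertainty taken as the coefficient of variation across 20 replicate (bootstrap) models.

```python
# Hedged sketch: boosted regression trees + CV of predictions across replicates.
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor

rng = np.random.default_rng(4)
X = rng.uniform(size=(1000, 2))                       # e.g., wetness, roughness
y = 5 * X[:, 0] + np.sin(6 * X[:, 1]) + rng.normal(0, 0.3, 1000)

preds = []
for seed in range(20):                                # 20 replicate models
    idx = rng.integers(0, len(X), len(X))             # bootstrap resample
    m = GradientBoostingRegressor(random_state=seed).fit(X[idx], y[idx])
    preds.append(m.predict(X))
preds = np.array(preds)

cv = preds.std(axis=0) / np.abs(preds.mean(axis=0))   # per-location uncertainty
print("median CV of predictions:", np.median(cv).round(3))
```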
Abstract:
The linear viscoelastic (LVE) spectrum is one of the primary fingerprints of polymer solutions and melts, carrying information about most relaxation processes in the system. Many single-chain theories and models start with predicting the LVE spectrum to validate their assumptions. However, until now, no reliable linear stress relaxation data were available from simulations of multichain systems. In this work, we propose a new efficient way to calculate a wide variety of correlation functions and mean-square displacements during simulations without significant additional CPU cost. Using this method, we calculate stress-stress autocorrelation functions for a simple bead-spring model of polymer melt for a wide range of chain lengths, densities, temperatures, and chain stiffnesses. The obtained stress-stress autocorrelation functions were compared with the single-chain slip-spring model in order to obtain entanglement-related parameters, such as the plateau modulus or the molecular weight between entanglements. Then, the dependence of the plateau modulus on the packing length is discussed. We have also identified three different contributions to the stress relaxation: bond-length relaxation, colloidal, and polymeric. Their dependence on the density and the temperature is demonstrated for short unentangled systems without inertia.
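The central quantity here, the stress-stress autocorrelation function behind the relaxation modulus G(t) ~ (V/kT)<sigma_xy(0) sigma_xy(t)>, can be sketched as simple FFT post-processing of a stored stress series; the paper's contribution is computing such correlations cheaply on the fly, which this sketch does not attempt.

```python
# Sketch: shear-stress autocorrelation of a stored time series via FFT.
import numpy as np

def autocorrelation(x):
    """Unbiased autocorrelation <x(0)x(t)> of a zero-mean series via FFT."""
    n = len(x)
    f = np.fft.rfft(x - x.mean(), 2 * n)              # zero-pad against wraparound
    acf = np.fft.irfft(f * np.conj(f))[:n]
    return acf / np.arange(n, 0, -1)                  # divide by #pairs per lag

rng = np.random.default_rng(5)
sigma_xy = rng.standard_normal(10_000)                # stand-in stress signal
acf = autocorrelation(sigma_xy)
print("normalized ACF at lags 0..4:", np.round(acf[:5] / acf[0], 3))
```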
Abstract:
A stochastic parameterization scheme for deep convection is described, suitable for use in both climate and NWP models. Theoretical arguments and the results of cloud-resolving models are discussed in order to motivate the form of the scheme. In the deterministic limit, it tends to a spectrum of entraining/detraining plumes and is similar to other current parameterizations. The stochastic variability describes the local fluctuations about a large-scale equilibrium state. Plumes are drawn at random from a probability distribution function (pdf) that defines the chance of finding a plume of given cloud-base mass flux within each model grid box. The normalization of the pdf is given by the ensemble-mean mass flux, which is computed with a CAPE closure method. The characteristics of each plume produced are determined using an adaptation of the plume model from the Kain-Fritsch parameterization. Initial tests in the single-column version of the Unified Model verify that the scheme is effective in producing the desired distributions of convective variability without adversely affecting the mean state.
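A minimal sketch of the stochastic step, assuming an exponential mass-flux pdf and illustrative closure numbers (both assumptions here, not values given by the abstract):

```python
# Hedged sketch: plumes drawn at random from a pdf of cloud-base mass flux,
# normalized by the ensemble-mean mass flux from a CAPE closure.
import numpy as np

rng = np.random.default_rng(6)

M_ensemble = 0.05   # ensemble-mean grid-box mass flux from CAPE closure (kg m-2 s-1)
m_mean = 0.01       # mean cloud-base mass flux of a single plume

# Number of plumes in the grid box fluctuates (Poisson), with mean M/<m>
n_plumes = rng.poisson(M_ensemble / m_mean)
plume_fluxes = rng.exponential(m_mean, size=n_plumes)

print(f"{n_plumes} plumes, total mass flux {plume_fluxes.sum():.4f} "
      f"(large-scale equilibrium value {M_ensemble})")
# Each sampled flux would then drive an entraining/detraining plume model
# (the abstract uses an adaptation of the Kain-Fritsch plume).
```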
Abstract:
A new technique is described for the analysis of cloud-resolving model simulations, which allows one to investigate the statistics of the lifecycles of cumulus clouds. Clouds are tracked from timestep to timestep within the model run. This allows for a very simple method of tracking, but one which is both comprehensive and robust. An approach for handling cloud splits and mergers is described which allows clouds with simple and complicated time histories to be compared within a single framework. This is found to be important for the analysis of an idealized simulation of radiative-convective equilibrium, in which the moist, buoyant updrafts (i.e., the convective cores) were tracked. Around half of all such cores were subject to splits and mergers during their lifecycles. For cores without any such events, the average lifetime is 30 min, but such events can lengthen the typical lifetime considerably.
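The timestep-to-timestep linking can be sketched with connected-component labelling and overlap matching; the boolean fields below are synthetic stand-ins for cloud-core masks, and the split/merger bookkeeping is only indicated in comments.

```python
# Sketch: label cloudy regions in consecutive snapshots and link overlapping
# labels. A link fan-out > 1 marks a split; fan-in > 1 marks a merger.
import numpy as np
from scipy import ndimage

def link_clouds(mask_prev, mask_next):
    """Return set of (label_prev, label_next) pairs that share grid points."""
    lab_prev, _ = ndimage.label(mask_prev)
    lab_next, _ = ndimage.label(mask_next)
    both = (lab_prev > 0) & (lab_next > 0)
    return set(zip(lab_prev[both], lab_next[both]))

rng = np.random.default_rng(7)
field = rng.random((50, 50))
links = link_clouds(field > 0.7, np.roll(field, 2, axis=0) > 0.7)  # "advected" step
print(f"{len(links)} overlap links between consecutive timesteps")
```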
Abstract:
The goal of this study is to evaluate the effect of mass lumping on the dispersion properties of four finite-element velocity/surface-elevation pairs that are used to approximate the linear shallow-water equations. For each pair, the dispersion relation, obtained using the mass lumping technique, is computed and analysed for both gravity and Rossby waves. The dispersion relations are compared with those obtained for the consistent schemes (without lumping) and for the continuous case. The P0-P1, RT0 and P-P1 pairs are shown to preserve good dispersive properties when the mass matrix is lumped. Results from test problems simulating fast gravity and slow Rossby waves are in good agreement with the analytical results.
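For intuition about what lumping does to dispersion, a 1-D P1 wave-equation analogue (not the paper's 2-D shallow-water pairs) has closed-form consistent and lumped dispersion relations that can be compared numerically:

```python
# 1-D P1 wave-equation analogue of a lumped-vs-consistent dispersion analysis:
#   consistent mass: omega^2 = (6 c^2/h^2) (1 - cos kh) / (2 + cos kh)
#   lumped mass:     omega^2 = (2 c^2/h^2) (1 - cos kh)
# against the exact relation omega = c k.
import numpy as np

c, h = 1.0, 1.0
kh = np.linspace(0.01, np.pi, 200)

omega_exact = c * kh / h
omega_consistent = np.sqrt(6 * (1 - np.cos(kh)) / (2 + np.cos(kh))) * c / h
omega_lumped = np.sqrt(2 * (1 - np.cos(kh))) * c / h

for label, om in [("consistent", omega_consistent), ("lumped", omega_lumped)]:
    err = np.max(np.abs(om - omega_exact) / omega_exact)
    print(f"{label}: max relative phase-speed error {err:.3f}")
```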
Abstract:
In this paper, the available potential energy (APE) framework of Winters et al. (J. Fluid Mech., vol. 289, 1995, p. 115) is extended to the fully compressible Navier-Stokes equations, with the aims of clarifying (i) the nature of the energy conversions taking place in turbulent thermally stratified fluids; and (ii) the role of surface buoyancy fluxes in the Munk & Wunsch (Deep-Sea Res., vol. 45, 1998, p. 1977) constraint on the mechanical energy sources of stirring required to maintain diapycnal mixing in the oceans. The new framework reveals that the observed turbulent rate of increase in the background gravitational potential energy GPE_r, commonly thought to occur at the expense of the diffusively dissipated APE, actually occurs at the expense of internal energy, as in the laminar case. The APE dissipated by molecular diffusion, on the other hand, is found to be converted into internal energy (IE), similar to the viscously dissipated kinetic energy KE. Turbulent stirring, therefore, does not introduce a new APE/GPE_r mechanical-to-mechanical energy conversion, but simply enhances the existing IE/GPE_r conversion rate, in addition to enhancing the viscous dissipation and the entropy production rates. This, in turn, implies that molecular diffusion contributes to the dissipation of the available mechanical energy ME = APE + KE, along with viscous dissipation. This result has important implications for the interpretation of the concepts of mixing efficiency γ_mixing and flux Richardson number R_f, for which new physically based definitions are proposed and contrasted with previous definitions. The new framework allows for a more rigorous and general re-derivation from first principles of Munk & Wunsch (1998, hereafter MW98)'s constraint, also valid for a non-Boussinesq ocean:

$$G(KE) \approx \frac{1 - \xi R_f}{\xi R_f}\, W_{r,\mathrm{forcing}} = \frac{1 + (1 - \xi)\,\gamma_{\mathrm{mixing}}}{\xi\, \gamma_{\mathrm{mixing}}}\, W_{r,\mathrm{forcing}},$$

where G(KE) is the work rate done by the mechanical forcing, W_r,forcing is the rate of loss of GPE_r due to high-latitude cooling, and ξ is a nonlinearity parameter such that ξ = 1 for a linear equation of state (as considered by MW98), but ξ < 1 otherwise. The most important result is that G(APE), the work rate done by the surface buoyancy fluxes, must be numerically as large as W_r,forcing and, therefore, as important as the mechanical forcing in stirring and driving the oceans. As a consequence, the overall mixing efficiency of the oceans is likely to be larger than the value γ_mixing = 0.2 presently used, thereby possibly eliminating the apparent shortfall in mechanical stirring energy that results from using γ_mixing = 0.2 in the above formula.
Abstract:
1. We compared the baseline phosphorus (P) concentrations inferred by diatom-P transfer functions and export coefficient models at 62 lakes in Great Britain to assess whether the techniques produce similar estimates of historical nutrient status.
2. There was a strong linear relationship between the two sets of values over the whole total P (TP) gradient (2-200 µg TP L⁻¹). However, a systematic bias was observed, with the diatom model producing the higher values in 46 lakes (of which values differed by more than 10 µg TP L⁻¹ in 21). The export coefficient model gave the higher values in 10 lakes (of which the values differed by more than 10 µg TP L⁻¹ in only 4).
3. The difference between baseline and present-day TP concentrations was calculated to compare the extent of eutrophication inferred by the two sets of model output. There was generally poor agreement between the amounts of change estimated by the two approaches. The discrepancy in both the baseline values and the degree of change inferred by the models was greatest in the shallow and more productive sites.
4. Both approaches were applied to two lakes in the English Lake District where long-term P data exist, to assess how well the models track measured P concentrations since approximately 1850. There was good agreement between the pre-enrichment TP concentrations generated by the models. The diatom model paralleled the steeper rise in maximum soluble reactive P (SRP) more closely than the gradual increase in annual mean TP in both lakes. The export coefficient model produced a closer fit to observed annual mean TP concentrations for both sites, tracking the changes in total external nutrient loading.
5. A combined approach is recommended, with the diatom model employed to reflect the nature and timing of the in-lake response to changes in nutrient loading, and the export coefficient model used to establish the origins and extent of changes in the external load and to assess potential reductions in loading under different management scenarios.
6. However, caution must be exercised when applying these models to shallow lakes, where the export coefficient model TP estimate will not include internal P loading from lake sediments and where the diatom TP inferences may over-estimate TP concentrations because of the high abundance of benthic taxa, many of which are poor indicators of trophic state.
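The intercomparison in points 1-2 can be sketched as follows; the two arrays are synthetic stand-ins for the diatom-inferred and export-coefficient baselines (generated here so that the diatom values run higher), and the reported counts are computed from that synthetic data, not the study's.

```python
# Sketch: quantify linear agreement and systematic bias between two sets of
# baseline TP estimates for the same lakes.
import numpy as np
from scipy import stats

rng = np.random.default_rng(8)
export_tp = rng.uniform(2, 200, 62)                   # µg TP L-1, 62 lakes
diatom_tp = export_tp * 1.15 + rng.normal(0, 8, 62)   # diatom model runs higher

slope, intercept, r, p, se = stats.linregress(export_tp, diatom_tp)
bias = diatom_tp - export_tp
print(f"r = {r:.2f}; diatom model higher at {np.sum(bias > 0)} of 62 lakes; "
      f"difference > 10 µg TP L-1 at {np.sum(np.abs(bias) > 10)} lakes")
```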
Abstract:
Models of the dynamics of nitrogen in soil (soil-N) can be used to aid the fertilizer management of a crop. The predictions of soil-N models can be validated by comparison with observed data. Validation generally involves calculating non-spatial statistics of the observations and predictions, such as their means, their mean squared-difference, and their correlation. However, when the model predictions are spatially distributed across a landscape the model requires validation with spatial statistics. There are three reasons for this: (i) the model may be more or less successful at reproducing the variance of the observations at different spatial scales; (ii) the correlation of the predictions with the observations may be different at different spatial scales; (iii) the spatial pattern of model error may be informative. In this study we used a model, parameterized with spatially variable input information about the soil, to predict the mineral-N content of soil in an arable field, and compared the results with observed data. We validated the performance of the N model spatially with a linear mixed model of the observations and model predictions, estimated by residual maximum likelihood. This novel approach allowed us to describe the joint variation of the observations and predictions as: (i) independent random variation that occurred at a fine spatial scale; (ii) correlated random variation that occurred at a coarse spatial scale; (iii) systematic variation associated with a spatial trend. The linear mixed model revealed that, in general, the performance of the N model changed depending on the spatial scale of interest. At the scales associated with random variation, the N model underestimated the variance of the observations, and the predictions were correlated poorly with the observations. At the scale of the trend, the predictions and observations shared a common surface. The spatial pattern of the error of the N model suggested that the observations were affected by the local soil condition, but this was not accounted for by the N model. In summary, the N model would be well-suited to field-scale management of soil nitrogen, but suited poorly to management at finer spatial scales. This information was not apparent with a non-spatial validation.
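A hedged sketch of the spatial validation idea: model prediction errors as a linear spatial trend plus a coarse-scale correlated component (exponential covariance) and a fine-scale independent nugget, fitting the variance parameters by plain maximum likelihood where the paper uses residual maximum likelihood (REML). All data below are synthetic.

```python
# Sketch: trend + correlated (coarse-scale) + nugget (fine-scale) decomposition
# of prediction errors, variance components fitted by maximum likelihood.
import numpy as np
from scipy.optimize import minimize
from scipy.spatial.distance import cdist

rng = np.random.default_rng(9)
coords = rng.uniform(0, 100, (150, 2))                  # sample locations (m)
X = np.column_stack([np.ones(150), coords])             # linear spatial trend
d = cdist(coords, coords)

# Synthetic model error: trend + correlated component + nugget
L = np.linalg.cholesky(2.0 * np.exp(-d / 20) + 1e-10 * np.eye(150))
err = X @ [1.0, 0.02, -0.01] + L @ rng.standard_normal(150) \
      + rng.normal(0, 0.7, 150)

def neg_loglik(theta):
    nugget, sill, corr_range = np.exp(theta)            # keep parameters positive
    C = sill * np.exp(-d / corr_range) + nugget * np.eye(len(err))
    Ci = np.linalg.inv(C)
    beta = np.linalg.solve(X.T @ Ci @ X, X.T @ Ci @ err)  # GLS trend coefficients
    r = err - X @ beta
    sign, logdet = np.linalg.slogdet(C)
    return 0.5 * (logdet + r @ Ci @ r)

fit = minimize(neg_loglik, np.log([0.5, 1.0, 10.0]), method="Nelder-Mead")
print("nugget, sill, range:", np.round(np.exp(fit.x), 2))
```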