940 results for "An eddy-resolving ocean model simulation"


Relevance: 100.00%

Abstract:

The aim of this research is to develop a tool for organizing coopetitive relationships between organizations on the basis of a two-sided Internet platform. The main result of this master thesis is a detailed description of the concept of lead-generating, Internet-platform-based coopetition. Results obtained with agent-based modelling and simulation suggest that the proposed concept can have a positive effect on certain industries (e.g., the web-design studio market) and can potentially bring benefits and extra profitability to most companies operating in such an industry. The results also indicate that the developed instrument can increase the transparency of the market to which it is applied.
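
As a purely illustrative sketch of the kind of agent-based simulation described (the thesis's actual model specification is not given in the abstract), consider a toy market in which firms on a platform pass surplus leads to competitors for a referral fee:

```python
import random

class Firm:
    def __init__(self, capacity):
        self.capacity = capacity   # jobs the firm can serve per period
        self.profit = 0.0

def simulate(share_leads, n_firms=20, periods=100, leads_per_period=30,
             margin=1.0, referral_fee=0.2, seed=42):
    random.seed(seed)              # same lead stream for both runs
    firms = [Firm(random.randint(1, 3)) for _ in range(n_firms)]
    for _ in range(periods):
        load = [0] * n_firms
        for _ in range(leads_per_period):
            i = random.randrange(n_firms)            # lead arrives at a random firm
            if load[i] < firms[i].capacity:
                load[i] += 1
                firms[i].profit += margin            # firm serves the lead itself
            elif share_leads:
                # Coopetition: the platform forwards the surplus lead to a
                # competitor with spare capacity; the referrer keeps a fee.
                free = [j for j in range(n_firms) if load[j] < firms[j].capacity]
                if free:
                    j = random.choice(free)
                    load[j] += 1
                    firms[j].profit += margin * (1 - referral_fee)
                    firms[i].profit += margin * referral_fee
            # Without sharing, a surplus lead is simply lost to the market.
    return sum(f.profit for f in firms)

print("total profit with lead sharing:   ", simulate(share_leads=True))
print("total profit without lead sharing:", simulate(share_leads=False))
```

In this toy setting, sharing raises total industry profit because leads that would otherwise be lost are served by a competitor, which is the qualitative effect the thesis attributes to the platform.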

Relevance: 100.00%

Abstract:

We present a model for the spectral solar irradiance between 200 and 400 nm. It is an extension of a total solar irradiance model based on simulating the fragmentation and erosion of sunspots, which takes as input the observed positions and areas of the spots at each daily time step. The emergence of spots on the side of the Sun facing away from Earth is simulated by stochastic injection. The model then simulates their decay, which produces smaller spots and faculae. The irradiance is computed by summing the contributions of the spots, the faculae and the quiet Sun. The model's free parameters are adjusted by comparing the generated time series with data from various satellites spanning three activity cycles. The spectral irradiance model, in turn, was obtained by modifying the calculation of the contributions of the spots, the faculae and the quiet Sun to account for their spectral dependence. The flux of the quiet photosphere is interpolated on a non-magnetized synthetic spectrum, while the spot contrast is obtained by taking the ratio of the flux from a synthetic spectrum representative of spots to that from the spectrum representative of the quiet Sun. The facular contrast is computed with a simple blackbody-inversion procedure, which requires a facular temperature profile obtained from atmosphere models. The data produced with the spectral irradiance model are compared with SOLSTICE observations on UARS. As the agreement is unsatisfactory, particularly for the minimum irradiance level and the amplitude of the variations, corrections are applied to the quiet-Sun flux, to the facular temperature profile and to the centre-to-limb dependence of the facular contrast. Finally, a facular temperature profile is reconstructed empirically by maximizing the agreement with the observations using a genetic algorithm. It is used to reconstruct irradiance time series back to 1874 at wavelengths of interest for stratospheric chemistry and dynamics.
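
As an illustration of the blackbody step, the following sketch estimates a facular contrast as a ratio of Planck functions; the brightness temperatures are assumed placeholder values, not the thesis's calibrated facular temperature profile:

```python
import numpy as np

H = 6.62607015e-34   # Planck constant (J s)
C = 2.99792458e8     # speed of light (m/s)
KB = 1.380649e-23    # Boltzmann constant (J/K)

def planck(wavelength_m, temperature_k):
    """Spectral radiance B(lambda, T) of a blackbody (W m^-2 sr^-1 m^-1)."""
    x = H * C / (wavelength_m * KB * temperature_k)
    return (2 * H * C**2 / wavelength_m**5) / np.expm1(x)

wavelengths_nm = np.array([200.0, 250.0, 300.0, 350.0, 400.0])
t_quiet, t_facula = 5770.0, 5870.0   # assumed brightness temperatures (K)

contrast = planck(wavelengths_nm * 1e-9, t_facula) / planck(wavelengths_nm * 1e-9, t_quiet)
for wl, c in zip(wavelengths_nm, contrast):
    print(f"{wl:5.0f} nm: facular contrast = {c:.3f}")
```

The contrast grows toward shorter wavelengths, which is why the facular temperature profile matters most in the 200-400 nm band modelled here.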

Relevance: 100.00%

Abstract:

Complex reflectance models, with their many parameters, some of which remain unintuitive, are difficult to control when trying to achieve a desired appearance. Moreover, even though an artist can more easily understand the shape of a surface's micro-geometry, modelling it in 3D and simulating it in 4D remain extremely tedious and memory-intensive. We propose an intermediate solution in which the artist represents a cross-section of a material in 2D by drawing a multi-layered surface micro-geometry. An efficient ray-tracing simulation carried out purely in 2D captures the light distributions affected by the micro-geometries. The out-of-plane deviation is computed automatically and probabilistically as a function of the normal at the intersection point and the direction of the incident ray. The result is complete, complex isotropic BRDFs, simulated at interactive rates, enabling interactive appearance editing of rich and varied reflectances.
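
To make the 2D simulation idea concrete, here is a minimal, hypothetical sketch that bounces rays off a drawn 2D height profile and bins the outgoing directions into one slice of a reflectance lobe; the paper's multi-layer materials and probabilistic out-of-plane deviation are not reproduced, and shadowing and multiple bounces are ignored:

```python
import numpy as np

rng = np.random.default_rng(0)

# The drawn micro-geometry: a 2D heightfield y = h(x), here a rough sawtooth.
xs = np.linspace(0.0, 1.0, 257)
heights = 0.05 * np.abs((xs * 20.0) % 2.0 - 1.0)
slopes = np.gradient(heights, xs)

def unit_normal(slope):
    """Upward unit normal of the profile where the local slope is dh/dx."""
    n = np.array([-slope, 1.0])
    return n / np.linalg.norm(n)

def reflect(d, n):
    """Mirror-reflect direction d about unit normal n."""
    return d - 2.0 * np.dot(d, n) * n

# Incident ray 30 degrees off the macroscopic surface normal, pointing down.
incident = np.array([np.sin(np.radians(30.0)), -np.cos(np.radians(30.0))])

out_angles = []
for _ in range(10_000):
    x_hit = rng.uniform(0.0, 1.0)                      # strike point on the profile
    i = min(np.searchsorted(xs, x_hit), len(xs) - 1)
    d = reflect(incident, unit_normal(slopes[i]))
    if d[1] > 0.0:                                     # keep rays leaving upward
        out_angles.append(np.degrees(np.arctan2(d[0], d[1])))

# Binning the outgoing angles yields one incident-direction slice of a BRDF.
hist, edges = np.histogram(out_angles, bins=36, range=(-90.0, 90.0))
print(f"{len(out_angles)} exiting rays; strongest lobe near "
      f"{edges[np.argmax(hist)]:.0f} degrees from the normal")
```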

Relevance: 100.00%

Abstract:

Severe local storms, including tornadoes, damaging hail and wind gusts, frequently occur over the eastern and northeastern states of India during the pre-monsoon season (March-May). Forecasting thunderstorms is one of the most difficult tasks in weather prediction, owing to their small spatial and temporal extent and the inherent non-linearity of their dynamics and physics. In this paper, sensitivity experiments are conducted with the WRF-NMM model to test the impact of convective parameterization schemes on the simulation of the severe thunderstorms that occurred over Kolkata on 20 May 2006 and 21 May 2007, and the model results are validated against observations. In addition, a simulation without any convective parameterization scheme was performed for each case to determine whether the model could simulate the convection explicitly. A statistical analysis based on the mean absolute error, root mean square error and correlation coefficient is performed to compare the simulated and observed data for the different convective schemes. The study shows that the prediction of thunderstorm-affected parameters is sensitive to the choice of convective scheme. The Grell-Devenyi cloud ensemble scheme simulated the thunderstorm activity well in terms of timing, intensity and region of occurrence, compared with the other convective schemes and with the explicit run.
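
The three verification scores named above are standard; a minimal sketch, with made-up observed and simulated series standing in for the paper's station data:

```python
import numpy as np

observed  = np.array([28.1, 25.4, 22.9, 24.7, 27.3])   # e.g., 2 m temperature (deg C)
simulated = np.array([27.5, 24.1, 23.8, 25.2, 26.0])   # one convective-scheme run

mae  = np.mean(np.abs(simulated - observed))            # mean absolute error
rmse = np.sqrt(np.mean((simulated - observed) ** 2))    # root mean square error
corr = np.corrcoef(simulated, observed)[0, 1]           # correlation coefficient

print(f"MAE = {mae:.2f}, RMSE = {rmse:.2f}, r = {corr:.2f}")
```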

Relevance: 100.00%

Abstract:

The problem of using information available from one variable X to make inference about another Y is classical in many physical and social sciences. In statistics this is often done via regression analysis, where the mean response is used to model the data. One stipulates the model Y = µ(X) + ɛ. Here µ(x) is the mean response at the predictor variable value X = x, and ɛ = Y - µ(X) is the error. In classical regression analysis both (X, Y) are observable, and one then proceeds to make inference about the mean response function µ(X). In practice there are numerous examples where X is not available, but a variable Z is observed which provides an estimate of X. As an example, consider the herbicide study of Rudemo et al. [3], in which a nominal measured amount Z of herbicide was applied to a plant, but the actual amount X absorbed by the plant is unobservable. As another example, from Wang [5], an epidemiologist studies the severity Y of a lung disease among the residents of a city in relation to the amount of certain air pollutants. The amount of air pollutants Z can be measured at certain observation stations in the city, but the actual exposure X of the residents to the pollutants is unobservable and may vary randomly from the Z-values. In both cases X = Z + error. This is the so-called Berkson measurement error model. In the more classical measurement error model one observes an unbiased estimator W of X and stipulates the relation W = X + error. An example of this model occurs when assessing the effect of nutrition X on a disease, since measuring nutrition intake precisely within 24 hours is almost impossible. There are many similar examples in agricultural or medical studies; see, e.g., Carroll, Ruppert and Stefanski [1] and Fuller [2], among others. In this talk we address the question of fitting a parametric model to the regression function µ(X) in the Berkson measurement error model: Y = µ(X) + ɛ, X = Z + η, where η and ɛ are random errors with E(ɛ) = 0, X and η are d-dimensional, and Z is the observable d-dimensional random vector.
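
A short simulation makes the Berkson structure concrete. The sketch below is a hypothetical illustration with a linear µ and Gaussian errors (none of which come from the talk); it contrasts fitting Y on the unobservable X with fitting Y on the observed Z:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 5000

z = rng.uniform(0.0, 4.0, n)          # observable nominal values
eta = rng.normal(0.0, 0.5, n)         # Berkson error
x = z + eta                           # true, unobservable predictor
eps = rng.normal(0.0, 0.3, n)
y = 1.0 + 2.0 * x + eps               # linear mu(X) = 1 + 2X

def ols(u, v):
    """Slope and intercept of the least-squares fit of v on u."""
    slope, intercept = np.polyfit(u, v, 1)
    return slope, intercept

print("fit on true X:     slope=%.3f intercept=%.3f" % ols(x, y))
print("fit on observed Z: slope=%.3f intercept=%.3f" % ols(z, y))
# For a linear mu, the Berkson structure leaves the slope estimate consistent,
# unlike the classical error model W = X + error, which attenuates it.
```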

Relevance: 100.00%

Abstract:

For the theoretical investigation of local phenomena (adsorption at surfaces, defects or impurities within a crystal, etc.) one can assume that the effects caused by the local disturbance are limited to the neighbouring particles. With this model, well known as the cluster approximation, an infinite system can be simulated by a much smaller segment of the surface (the cluster). The size of this segment varies strongly between systems. Calculations of the convergence of the bond distance and binding energy of an aluminium atom adsorbed on an Al(100) surface showed that more than 100 atoms are necessary for a sufficient description of the surface properties. With a full quantum-mechanical approach, however, systems of this size cannot be calculated because of the required computer memory and processor time. We therefore developed an embedding procedure for the simulation of surfaces and solids in which the whole system is partitioned into several parts that are treated differently: the inner part (the cluster), located near the adsorbate, is calculated fully self-consistently and is embedded into an environment, while the influence of the environment on the cluster enters the relativistic Kohn-Sham equations as an additional, external potential. The procedure is based on density functional theory, which means that the choice of the electronic density of the environment determines the quality of the embedding. The environment density was modelled in three different ways: as atomic densities; as densities transferred from a large preceding calculation without embedding; and as copied bulk densities. The embedding procedure was tested on the atomic adsorption of Al on Al(100) and of Cu on Cu(100). If the environment is chosen appropriately for the Al system, only 9 embedded atoms are needed to reproduce the results of exact slab calculations. For the Cu system, calculations without embedding were performed first, showing that 60 atoms already suffice as a surface cluster; with the embedding procedure the same values were obtained with only 25 atoms. This is a substantial improvement, considering that the calculation time increases cubically with the number of atoms. With the embedding method, infinite systems can be treated by molecular methods. In addition, the program code was extended with the capability to perform molecular-dynamics simulations, so that, beyond the previous fixed-core calculations, the structures of small clusters and surfaces can now also be investigated. As a first application we studied the adsorption of Cu on Cu(100): we calculated the relaxed positions of the atoms located close to the adsorption site and then performed the full quantum-mechanical calculation of this system, repeating the procedure for different distances from the surface. Thus a realistic adsorption process could be examined for the first time. It should be noted that for the Cu reference calculations (without embedding) we began to parallelize the entire program code; only because of this were the investigations of the 100-atom Cu surface clusters possible. Owing to the good efficiency of both the parallelization and the developed embedding procedure, we will be able to apply this combination in the future, and it will become possible to bring in results of fully relativistic molecular calculations, which will be especially interesting for the regime of heavy systems.
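
The embedding idea can be illustrated with a deliberately simple toy model: a one-dimensional "cluster" Hamiltonian to which the environment contributes only an additional external potential. This sketch is purely conceptual and assumes nothing from the thesis beyond that partitioning; it is not the relativistic Kohn-Sham machinery described above:

```python
import numpy as np

n, dx = 400, 0.05
x = (np.arange(n) - n / 2) * dx

v_cluster = -2.0 * np.exp(-x**2)                             # potential of the cluster itself
v_env = -0.3 * (np.exp(-(x - 6)**2) + np.exp(-(x + 6)**2))   # model environment potential

def ground_state_energy(v):
    """Lowest eigenvalue of -1/2 d^2/dx^2 + v on a finite-difference grid."""
    main = 1.0 / dx**2 + v                    # diagonal: kinetic + potential
    off = -0.5 / dx**2 * np.ones(n - 1)       # nearest-neighbour kinetic coupling
    h = np.diag(main) + np.diag(off, 1) + np.diag(off, -1)
    return np.linalg.eigvalsh(h)[0]

print("bare cluster:     E0 = %.4f" % ground_state_energy(v_cluster))
print("embedded cluster: E0 = %.4f" % ground_state_energy(v_cluster + v_env))
```

Only the small cluster region is diagonalized; the environment never appears as explicit degrees of freedom, which is what keeps the cost independent of the size of the surroundings.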

Relevance: 100.00%

Abstract:

The ongoing depletion of the coastal aquifer in the Gaza Strip due to groundwater overexploitation has led to seawater intrusion, which is becoming an increasingly serious problem in Gaza as seawater invades further along many sections of the coastal shoreline. As a first step towards getting a hold on the problem, an artificial neural network (ANN) model is applied as a new approach and attractive tool for studying and predicting groundwater levels without physically based hydrologic parameters; it also improves the understanding of complex groundwater systems and can show the effects of hydrologic, meteorological and anthropogenic impacts on groundwater conditions. Predicting the future behaviour of the seawater intrusion process in the Gaza aquifer is thus of crucial importance to safeguard the already scarce groundwater resources in the region. In this study the coupled three-dimensional groundwater flow and density-dependent solute transport model SEAWAT, as implemented in Visual MODFLOW, is applied to the Gaza coastal aquifer system to simulate the location and dynamics of the saltwater–freshwater interface in the aquifer over the period 2000-2010. Very good agreement between simulated and observed TDS salinities is obtained, with correlation coefficients of 0.902 for the steady-state and 0.883 for the transient calibration. After successful calibration of the solute transport model, future management scenarios for the Gaza aquifer are simulated in order to get a more comprehensive view of the effects of the artificial recharge that has been planned for the Gaza Strip for some time to forestall, or even remedy, the presently adverse aquifer conditions, namely low groundwater heads and high salinity, by the end of the target simulation period, the year 2040. To that end, numerous management schemes are examined to maintain the groundwater system and control the salinity distribution within the target period 2011-2040. The first, pessimistic scenario assumes that pumping from the aquifer continues to increase in the near future to meet the rising water demand, and that there is no recharge to the aquifer other than natural precipitation. The second, optimistic scenario assumes that treated surficial wastewater can be used as a source of additional artificial recharge to the aquifer which, in principle, should not only increase its sustainable yield, but could, in the best case, even revert some of the adverse present-day conditions in the aquifer, i.e., seawater intrusion. This scenario is examined in three cases that differ in the locations and extents of the injection fields for the treated wastewater. The results obtained with the first (do-nothing) scenario indicate ongoing negative impacts on the aquifer, such as a higher propensity for strong seawater intrusion into the Gaza aquifer. This scenario shows that, compared with the 2010 situation of the baseline model, by the end of the simulation period, the year 2040, the amount of saltwater intrusion into the coastal aquifer will have increased by about 35%, and the salinity by 34%. In contrast, all three cases of the second (artificial recharge) scenario group can partly revert the present seawater intrusion.
From the water-budget point of view, compared with the first (do-nothing) scenario, by the year 2040 the water added to the aquifer by artificial recharge reduces the amount of water entering the aquifer by seawater intrusion by 81, 77 and 72% for the three recharge cases, respectively. Meanwhile, the salinity in the Gaza aquifer decreases by 15, 32 and 26% for the three cases, respectively.
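
For the ANN part of the approach, a minimal sketch along the lines described, using scikit-learn and synthetic stand-in data (the Gaza observations are not reproduced here, and the choice of drivers is an illustrative assumption):

```python
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(7)
n = 600
rainfall = rng.gamma(2.0, 15.0, n)        # monthly rainfall (mm), meteorological driver
abstraction = rng.uniform(200, 600, n)    # pumping (m^3/day), anthropogenic driver
prev_head = rng.normal(-3.0, 1.5, n)      # previous groundwater head (m), persistence

# Synthetic "truth": head responds to recharge, pumping and its own history.
head = 0.8 * prev_head + 0.01 * rainfall - 0.004 * abstraction + rng.normal(0, 0.2, n)

X = np.column_stack([rainfall, abstraction, prev_head])
X_tr, X_te, y_tr, y_te = train_test_split(X, head, random_state=0)

ann = MLPRegressor(hidden_layer_sizes=(16, 8), max_iter=3000, random_state=0)
ann.fit(X_tr, y_tr)
print("R^2 on held-out data:", round(ann.score(X_te, y_te), 3))
```

The point of the method is visible even in this toy: the network learns the head response directly from driver time series, with no aquifer parameters supplied.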

Relevance: 100.00%

Abstract:

Starting from the identification of the different actors participating in the environment in which the restaurant El Molino operates, this work seeks to determine which marketing strategies could be most effective so that the image, concept and service of the restaurant, and the brand in general, are as attractive as possible to the company's target segments. Since the business is recent, there are no historical data on the image the brand projects to its customers; therefore, the information from which alternatives will be generated for the brand to influence customers in the desired way will come from a simulation produced with an agent-based model. The goal is to parameterize in which aspects, and in what way, the company should invest so that customers perceive the brand as the restaurant intends.

Relevance: 100.00%

Abstract:

With the growing popularity of IT solutions as a key factor in increasing competitiveness and creating value for companies, the need to invest in IT projects is rising considerably. Limited resources as an obstacle to investment have forced companies to look for methodologies to select and prioritize projects, making sure that the decisions taken are aligned with corporate strategies so as to ensure value creation and the maximization of benefits. This thesis provides the foundations for the implementation of IT Project Portfolio Management (IT PPM) as an effective methodology for managing IT-based projects and a tool that gives executives clear decision-making criteria. The document describes how to implement IT PPM in seven steps, analysing the processes and functions necessary for its successful execution, and provides different methods and criteria for project selection and prioritization. After the theoretical part describing IT PPM, the thesis contributes a case-study analysis of a pharmaceutical company. The company already has a project management department, but the need to implement IT PPM was identified because of its broad end-to-end process coverage of IT projects and as a way of ensuring the maximization of benefits. Building on the theoretical research and the case-study analysis, the thesis concludes with a practical definition of an approximate IT PPM model as a recommendation for its implementation in the project management department.

Relevance: 100.00%

Abstract:

The concept of effectiveness in inter-organizational networks has been little investigated, despite its great importance for the development and sustainability of the network. Understanding this concept is essential because, when we speak of a network, we refer to a group of more than three organizations working together to achieve a collective objective that benefits each member of the network. This demonstrates the importance of evaluating and analysing the phenomenon of the inter-organizational network in more detail, in order to establish which structures, forms of governance, relationships between members and other factors influence the effectiveness and durability of the inter-organizational network. This research is carried out with the aim of proposing an approach to the concept of measuring effectiveness in inter-organizational networks. The work focuses on the compilation of information and on documentary research, carried out in phases to give the reader greater clarity about what a network is, what an inter-organizational network is, and what effectiveness is; finally, effectiveness in an inter-organizational network is studied.

Relevance: 100.00%

Abstract:

An operational dust forecasting model is developed by including the Met Office Hadley Centre climate model dust parameterization scheme within a Met Office regional numerical weather prediction (NWP) model. The model includes parameterizations for dust uplift, dust transport and dust deposition in six discrete size bins, and provides diagnostics such as the aerosol optical depth. The results are compared against surface and satellite remote sensing measurements, and against in situ measurements from the Facility for Airborne Atmospheric Measurements, for a case study in which a strong dust event was forecast. Comparisons are also performed against satellite and surface instrumentation for the entire month of August. The case study shows that this Saharan dust NWP model can provide very good guidance on dust events as much as 42 h ahead. The analysis of the monthly data suggests that the mean and the variability in the dust model are also well represented.
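
A sketch of the optical depth diagnostic over six size bins, with illustrative loadings and mass extinction efficiencies rather than the Met Office scheme's actual values:

```python
import numpy as np

# Column dust loading per size bin (kg m^-2), smallest to largest bin.
column_load = np.array([2e-5, 8e-5, 3e-4, 5e-4, 4e-4, 1e-4])

# Assumed mass extinction efficiency per bin at 550 nm (m^2 kg^-1); fine bins
# extinguish more light per unit mass than coarse bins.
k_ext = np.array([3000.0, 1800.0, 900.0, 400.0, 150.0, 50.0])

# AOD is the sum over bins of column loading times mass extinction efficiency.
aod = float(np.sum(column_load * k_ext))
print(f"diagnosed dust AOD at 550 nm: {aod:.2f}")
```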

Relevance: 100.00%

Abstract:

The prediction of climate variability and change requires the use of a range of simulation models. Multiple climate model simulations are needed to sample the inherent uncertainties in seasonal to centennial prediction. Because climate models are computationally expensive, there is a trade-off between complexity, spatial resolution, simulation length, and ensemble size. The methods used to assess climate impacts are examined in the context of this trade-off. An emphasis on complexity allows simulation of coupled mechanisms, such as the carbon cycle and feedbacks between agricultural land management and climate. In addition to improving skill, greater spatial resolution increases relevance to regional planning. Greater ensemble size improves the sampling of probabilities. Research from major international projects is used to show the importance of synergistic research efforts. The primary climate impact examined is crop yield, although many of the issues discussed are relevant to hydrology and health modeling. Methods used to bridge the scale gap between climate and crop models are reviewed. Recent advances include large-area crop modeling, quantification of uncertainty in crop yield, and fully integrated crop–climate modeling. The implications of trends in computer power, including supercomputers, are also discussed.

Relevance: 100.00%

Abstract:

Severe wind storms are one of the major natural hazards in the extratropics and inflict substantial economic damage and even casualties. Insured storm-related losses depend on (i) the frequency, nature and dynamics of storms, (ii) the vulnerability of the values at risk, (iii) the geographical distribution of these values, and (iv) the particular conditions of the risk transfer. It is thus of great importance to assess the impact of climate change on future storm losses. To this end, the current study employs, to our knowledge for the first time, a coupled approach in which output from high-resolution regional climate model scenarios for the European sector drives an operational insurance loss model. An ensemble of coupled climate-damage scenarios is used to provide an estimate of the inherent uncertainties. Output of two state-of-the-art global climate models (HadAM3, ECHAM5) is used for present (1961–1990) and future climates (2071–2100, SRES A2 scenario). These serve as boundary data for two nested regional climate models with sophisticated gust parametrizations (CLM, CHRM). For validation and calibration purposes, an additional simulation is undertaken with the CHRM driven by the ERA40 reanalysis. The operational insurance model (Swiss Re) uses a Europe-wide damage function and an average vulnerability curve for all risk types, and contains the actual value distribution of a complete European market portfolio. The coupling between the climate and damage models is based on daily maxima of 10 m gust winds, and the strategy adopted consists of three main steps: (i) development and application of a pragmatic selection criterion to retrieve significant storm events, (ii) generation of a probabilistic event set using a Monte Carlo approach in the hazard module of the insurance model, and (iii) calibration of the simulated annual expected losses against a historical loss database. The climate models considered agree on an increase in the intensity of extreme storms in a band across central Europe (stretching from the southern UK and northern France to Denmark and northern Germany and into eastern Europe). This effect increases with event strength, and rare storms show the largest climate change sensitivity, but are also beset with the largest uncertainties. Wind gusts decrease over northern Scandinavia and southern Europe. The highest intra-ensemble variability is simulated for Ireland, the UK, the Mediterranean, and parts of eastern Europe. The resulting changes in Europe-wide losses over the 110-year period are positive for all layers and all model runs considered and amount to 44% (annual expected loss), 23% (10-year loss), 50% (30-year loss), and 104% (100-year loss). There is a disproportionate increase in losses for rare high-impact events. The changes result from increases in both the severity and the frequency of wind gusts. Considerable geographical variability in the expected losses exists, with Denmark and Germany experiencing the largest loss increases (116% and 114%, respectively). All countries considered except Ireland (−22%) experience some loss increase. Some ramifications of these results for the socio-economic sector are discussed, and future avenues for research are highlighted. The technique introduced in this study and its application to realistic market portfolios offer exciting prospects for future research on the impacts of climate change that are relevant for policy makers, scientists and economists.
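
Step (i) of the coupling strategy can be sketched as a simple exceedance screen on daily gust maxima; the grid, threshold and data below are illustrative assumptions, since the abstract does not spell out the actual selection criterion:

```python
import numpy as np

rng = np.random.default_rng(3)
n_days, n_cells = 365, 500                      # one year over a model domain

# Synthetic daily maximum 10 m gusts (m/s): most days calm, a few days with
# strong synoptic forcing, mimicking regional-climate-model output.
daily_scale = rng.gamma(4.0, 2.5, n_days)[:, None]
gusts = rng.weibull(2.0, (n_days, n_cells)) * daily_scale

threshold = 25.0      # assumed damage-relevant gust speed (m/s)
min_area = 0.05       # at least 5% of the domain must exceed it

exceed_frac = (gusts > threshold).mean(axis=1)  # fraction of cells per day
event_days = np.flatnonzero(exceed_frac >= min_area)
print(f"{event_days.size} candidate storm days passed to the hazard module")
```

In the study's chain, the days retained here would feed the Monte Carlo event-set generation of step (ii) and the loss calibration of step (iii).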

Relevance: 100.00%

Abstract:

The nature and magnitude of climatic variability during the period of middle Pliocene warmth (ca 3.29–2.97 Ma) is poorly understood. We present a suite of palaeoclimate modelling experiments for 3.29, 3.12 and 2.97 Ma BP, incorporating an advanced atmospheric general circulation model (GCM) coupled to a Q-flux ocean model. Astronomical solutions for the periods in question were derived from the Berger and Loutre BL2 astronomical solution. Boundary conditions, excluding sea surface temperatures (SSTs), which were predicted by the slab ocean model, were provided by the USGS PRISM2 2°×2° digital data set. The model results indicate that little annual variation (0.5°C) in SSTs, relative to a ‘control’ experiment, occurred during the middle Pliocene in response to the altered orbital configurations. Annual surface air temperatures also displayed little variation. Seasonally, surface air temperatures were cooler during December, January and February, and warmer during June, July and August. This pattern is consistent with the altered seasonality resulting from the prescribed orbital configurations. Precipitation changes follow the seasonal trend observed for surface air temperature. Compared to the present day, surface wind strength and wind stress over the North Atlantic, North Pacific and Southern Ocean remained greater in each of the Pliocene experiments. This suggests that wind-driven gyral circulation may have been consistently greater during the middle Pliocene. The trend of climatic variability predicted by the GCM for the middle Pliocene accords with geological data. However, it is unclear whether the model correctly simulates the magnitude of the variation. This uncertainty derives from (a) the relative insensitivity of the GCM to perturbations in the imposed boundary conditions, (b) a lack of detailed time-series data concerning changes in terrestrial ice cover and greenhouse gas concentrations for the middle Pliocene, and (c) difficulties in representing the effects of ‘climatic history’ in snapshot GCM experiments.

Relevance: 100.00%

Abstract:

The triggering of convective orographic rainbands by small-scale topographic features is investigated through observations of a banded precipitation event over the Oregon Coastal Range and simulations using a cloud-resolving numerical model. A quasi-idealized simulation of the observed event reproduces the bands in the radar observations, indicating the model’s ability to capture the physics of the band-formation process. Additional idealized simulations reinforce that the bands are triggered by lee waves past small-scale topographic obstacles just upstream of the nominal leading edge of the orographic cloud. Whether a topographic obstacle in this region is able to trigger a strong rainband depends on the phase of its lee wave at cloud entry. Convective growth only occurs downstream of obstacles that give rise to lee-wave-induced displacements that create positive vertical velocity anomalies w_c and nearly zero buoyancy anomalies b_c as air parcels undergo saturation. This relationship is quantified through a simple analytic condition involving w_c, b_c, and the static stability N_m^2 of the cloud mass. Once convection is triggered, horizontal buoyancy gradients in the cross-flow direction generate circulations that align the bands parallel to the flow direction.
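
The abstract does not reproduce the analytic condition itself, so the sketch below only encodes its stated qualitative content: convective growth needs a positive vertical-velocity anomaly w_c with a near-zero buoyancy anomaly b_c at cloud entry, in a conditionally unstable cloud mass (N_m^2 < 0). The tolerance and amplitudes are arbitrary stand-ins for the paper's inequality:

```python
import numpy as np

def favours_band(w_c, b_c, n_m2, b_tol=1e-3):
    """Screen a lee-wave anomaly pair at cloud entry (SI units).

    Encodes only the abstract's qualitative statement: growth needs a
    positive vertical-velocity anomaly w_c, a near-zero buoyancy anomaly
    b_c, and a conditionally unstable cloud mass (n_m2 < 0).
    """
    return (w_c > 0.0) and (abs(b_c) < b_tol) and (n_m2 < 0.0)

# In a stationary lee wave the w and b anomalies are in quadrature, so the
# phase of the wave at cloud entry decides which obstacles trigger bands.
for phase in np.linspace(0.0, 2.0 * np.pi, 8, endpoint=False):
    w_c = 0.5 * np.cos(phase)      # m/s, illustrative amplitude
    b_c = 0.002 * np.sin(phase)    # m/s^2, illustrative amplitude
    print(f"phase {phase:4.2f} rad: triggers band -> "
          f"{favours_band(w_c, b_c, n_m2=-1e-5)}")
```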