924 results for Distributed process model
Abstract:
Market globalization has driven a series of changes in the structure of international trade, such as the emergence of free trade areas resulting from economic integration, which have facilitated flows of capital, resources and people. Internationalization has become not only a strategy for seizing opportunities in international markets, but also a means of risk diversification that reduces dependence on the domestic market. However, undertaking an internationalization process requires a thorough understanding of the context in which customers operate, since lack of knowledge of the environment can harm the financial health of the company. Hence the importance of adopting a definition of community, and community strategies, that identify the community's needs, objectives and interests, in order to establish a long-term relationship that fosters the development of both parties. The strategic community relationship and marketing have a positive impact on the financial health of the company: this mutual development of community and company not only increases the interest in and commitment to continued interaction, it also creates emotional bonds between the two parties, further consolidating the durability of the relationship, thereby building customer loyalty and ultimately increasing the company's profitability.
Abstract:
In most commercially available predictive control packages there is a separation between economic optimisation and predictive control, although both algorithms may be part of the same software system. In this article, that approach is compared with two alternatives in which the economic objectives are included directly in the predictive control algorithm. Simulations are carried out using the Tennessee Eastman process model.
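The contrast between layered and direct economic optimisation can be sketched for a toy scalar plant (a hypothetical example, not the Tennessee Eastman model; the dynamics, price and horizon below are invented): the economic term enters the stage cost of the predictive controller itself, rather than a separate optimisation layer.

```python
import itertools

# Hypothetical scalar plant x[k+1] = a*x[k] + b*u[k] (NOT the Tennessee
# Eastman process; just an illustration of the combined cost structure).
a, b = 0.9, 0.5
setpoint = 1.0
price = 0.1          # economic cost per unit of input
horizon = 3
u_grid = [0.0, 0.5, 1.0, 1.5, 2.0]

def stage_cost(x, u):
    # Tracking error plus a direct economic term, instead of
    # optimising economics in a separate layer.
    return (x - setpoint) ** 2 + price * u

def mpc_step(x):
    best_u, best_cost = None, float("inf")
    for seq in itertools.product(u_grid, repeat=horizon):
        xk, cost = x, 0.0
        for u in seq:
            cost += stage_cost(xk, u)
            xk = a * xk + b * u
        if cost < best_cost:
            best_u, best_cost = seq[0], cost
    return best_u  # receding horizon: apply only the first input

x = 0.0
for _ in range(20):
    x = a * x + b * mpc_step(x)
print(round(x, 3))  # hovers near the setpoint, biased by the input price
```

The brute-force search over a small input grid stands in for the real optimiser; the point is only where the economic term sits in the objective.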
Abstract:
We present a comparative analysis of projected impacts of climate change on river runoff from two types of distributed hydrological model, a global hydrological model (GHM) and catchment-scale hydrological models (CHM). Analyses are conducted for six catchments that are global in coverage and feature strong contrasts in spatial scale as well as climatic and development conditions. These include the Liard (Canada), Mekong (SE Asia), Okavango (SW Africa), Rio Grande (Brazil), Xiangu (China) and Harper's Brook (UK). A single GHM (Mac-PDM.09) is applied to all catchments whilst different CHMs are applied for each catchment. The CHMs typically simulate water resources impacts based on a more explicit representation of catchment water resources than that available from the GHM, and the CHMs include river routing. Simulations of average annual runoff, mean monthly runoff and high (Q5) and low (Q95) monthly runoff under baseline (1961-1990) and climate change scenarios are presented. We compare the simulated runoff response of each hydrological model to (1) prescribed increases in global mean temperature from the HadCM3 climate model and (2) a prescribed increase in global mean temperature of 2 °C for seven GCMs, to explore response to climate model and structural uncertainty. We find that differences in projected changes of mean annual runoff between the two types of hydrological model can be substantial for a given GCM, and they are generally larger for indicators of high and low flow. However, they are relatively small in comparison to the range of projections across the seven GCMs. Hence, for the six catchments and seven GCMs we considered, climate model structural uncertainty is greater than the uncertainty associated with the type of hydrological model applied.
Moreover, shifts in the seasonal cycle of runoff with climate change are presented similarly by both hydrological models, although for some catchments the monthly timing of high and low flows differs. This implies that for studies that seek to quantify and assess the role of climate model uncertainty on catchment-scale runoff, it may be equally feasible to apply a GHM as it is to apply a CHM, especially when climate modelling uncertainty across the range of available GCMs is as large as it currently is. Whilst the GHM is able to represent the broad climate change signal that is represented by the CHMs, we find, however, that for some catchments there are differences between GHMs and CHMs in mean annual runoff due to differences in potential evaporation estimation methods, in the representation of the seasonality of runoff, and in the magnitude of changes in extreme monthly runoff, all of which have implications for future water management issues.
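The Q5 and Q95 indicators used above are exceedance percentiles of monthly runoff: Q5 is the value exceeded 5% of the time (high flow), Q95 the value exceeded 95% of the time (low flow). A minimal sketch with synthetic monthly runoff (illustrative values, not data from the study):

```python
import random

random.seed(0)
# Synthetic 30-year monthly runoff series (mm/month); illustrative only.
runoff = [random.gammavariate(2.0, 30.0) for _ in range(360)]

def exceedance(series, pct):
    """Runoff value exceeded `pct` percent of the time."""
    s = sorted(series, reverse=True)
    idx = min(len(s) - 1, int(round(pct / 100.0 * (len(s) - 1))))
    return s[idx]

q5 = exceedance(runoff, 5)    # high monthly flow indicator
q95 = exceedance(runoff, 95)  # low monthly flow indicator
mean_annual = sum(runoff) / 30.0
print(q5 > q95 > 0)  # True: the 5% exceedance exceeds the 95% exceedance
```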
Abstract:
Current methods for estimating vegetation parameters are generally sub-optimal in the way they exploit information and do not generally consider uncertainties. We look forward to a future where operational data assimilation schemes improve estimates by tracking land surface processes and exploiting multiple types of observations. Data assimilation schemes seek to combine observations and models in a statistically optimal way, taking into account uncertainty in both, but have not yet been much exploited in this area. The EO-LDAS scheme and prototype, developed under ESA funding, is designed to exploit the anticipated wealth of data that will be available under GMES missions, such as the Sentinel family of satellites, to provide improved mapping of land surface biophysical parameters. This paper describes the EO-LDAS implementation, and explores some of its core functionality. EO-LDAS is a weak constraint variational data assimilation system. The prototype provides a mechanism for constraint based on a prior estimate of the state vector, a linear dynamic model, and Earth Observation data (top-of-canopy reflectance here). The observation operator is a non-linear optical radiative transfer model for a vegetation canopy with a soil lower boundary, operating over the range 400 to 2500 nm. Adjoint codes for all model and operator components are provided in the prototype by automatic differentiation of the computer codes. In this paper, EO-LDAS is applied to the problem of daily estimation of six of the parameters controlling the radiative transfer operator over the course of a year (> 2000 state vector elements). Zero and first order process model constraints are implemented and explored as the dynamic model. The assimilation estimates all state vector elements simultaneously.
This is performed in the context of a typical Sentinel-2 MSI operating scenario, using synthetic MSI observations simulated with the observation operator, with uncertainties typical of those achieved by optical sensors assumed for the data. The experiments consider a baseline state vector estimation case where no dynamic constraints are applied, and assess the impact of dynamic constraints on the a posteriori uncertainties. The results demonstrate that reductions in uncertainty by a factor of up to two may be obtained by applying the sorts of dynamic constraints used here. The hyperparameters (dynamic model uncertainty) required to control the assimilation are estimated by a cross-validation exercise. The result of the assimilation is seen to be robust to missing observations, even with quite large data gaps.
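As a schematic illustration of a weak-constraint variational scheme of this general kind (a toy sketch, not the EO-LDAS code: the observation operator here is the identity rather than a radiative transfer model, and the prior term is omitted for brevity), a first-order dynamic model can act as a penalised, rather than exact, constraint when estimating a daily state from sparse observations:

```python
import random

random.seed(1)
# Toy weak-constraint variational assimilation: estimate a daily state
# x_t from sparse noisy observations, with a first-order dynamic model
# x_t ~ x_{t-1} applied as a weak (penalised) constraint.
T = 100
truth = [1.0 + 0.5 * (t / T) for t in range(T)]            # slowly varying state
obs = {t: truth[t] + random.gauss(0, 0.1) for t in range(0, T, 10)}

s_obs, s_model = 0.1, 0.05        # observation / dynamic-model uncertainties

def cost_grad(x):
    g = [0.0] * T
    for t, y in obs.items():                 # observation misfit term
        g[t] += (x[t] - y) / s_obs**2
    for t in range(1, T):                    # weak dynamic constraint
        d = (x[t] - x[t - 1]) / s_model**2
        g[t] += d
        g[t - 1] -= d
    return g

x = [1.0] * T                                # initial guess
for _ in range(2000):                        # simple gradient descent
    g = cost_grad(x)
    x = [xi - 1e-4 * gi for xi, gi in zip(x, g)]

rmse = (sum((a - b) ** 2 for a, b in zip(x, truth)) / T) ** 0.5
print(round(rmse, 3))
```

The full scheme solves the analogous minimisation with a non-linear observation operator and adjoint-derived gradients; the structure of the cost function is the same.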
Abstract:
Now, more than ever, higher education institutions are reflecting on the need for flexible leadership models to help adapt to the fast changing academic environment. Rapid shifts in the sector are contributing to a kaleidoscopic ‘supercomplexity’ of challenges, structures, processes and value frameworks for academics who lead and for those who are led. How are institutions’ leadership structures and roles developing in response to these changes? And how do these responses affect academic staff in relation to their identity, status and career trajectory? This paper reports on a Leadership Foundation funded research project exploring the ways in which one UK institution has implemented a new ‘distributed’ leadership model. Crucially, the project examines the impact of the model on both those who are leaders and those being led.
Abstract:
The susceptibility of a catchment to flooding is affected by its soil moisture prior to an extreme rainfall event. While soil moisture is routinely observed by satellite instruments, results from previous work on the assimilation of remotely sensed soil moisture into hydrologic models have been mixed. This may have been due in part to the low spatial resolution of the observations used. In this study, the remote sensing aspects of a project attempting to improve flow predictions from a distributed hydrologic model by assimilating soil moisture measurements are described. Advanced Synthetic Aperture Radar (ASAR) Wide Swath data were used to measure soil moisture as, unlike low resolution microwave data, they have sufficient resolution to allow soil moisture variations due to local topography to be detected, which may help to take into account the spatial heterogeneity of hydrological processes. Surface soil moisture content (SSMC) was measured over the catchments of the Severn and Avon rivers in the South West UK. To reduce the influence of vegetation, measurements were made only over homogeneous pixels of improved grassland determined from a land cover map. Radar backscatter was corrected for terrain variations and normalized to a common incidence angle. SSMC was calculated using change detection. To search for evidence of a topographic signal, the mean SSMC from improved grassland pixels on low slopes near rivers was compared to that on higher slopes. When the mean SSMC on low slopes was 30–90%, the higher slopes were slightly drier than the low slopes. The effect was reversed for lower SSMC values. It was also more pronounced during a drying event. These findings contribute to the scant information in the literature on the use of high resolution SAR soil moisture measurement to improve hydrologic models.
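The change-detection step can be sketched generically (a common linear scaling between dry and wet reference backscatter; the values below are illustrative, not data from the study):

```python
# Change detection for surface soil moisture content (SSMC): scale the
# (terrain-corrected, angle-normalised) backscatter sigma0 between the
# driest and wettest values observed at a pixel over the time series.
# Illustrative values in dB; not data from the study.
sigma0_series = [-14.0, -12.5, -9.0, -10.5, -8.0, -13.0]

s_dry = min(sigma0_series)   # reference: driest observed state
s_wet = max(sigma0_series)   # reference: wettest observed state

def ssmc_percent(sigma0):
    return 100.0 * (sigma0 - s_dry) / (s_wet - s_dry)

ssmc = [round(ssmc_percent(s), 1) for s in sigma0_series]
print(ssmc)  # 0% at the driest acquisition, 100% at the wettest
```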
Abstract:
An analysis of diabatic heating and moistening processes from 12-36 hour lead time forecasts from 12 global circulation models is presented as part of the "Vertical structure and physical processes of the Madden-Julian Oscillation (MJO)" project. A lead time of 12-36 hours is chosen to constrain the large-scale dynamics and thermodynamics to be close to observations, while avoiding being too close to the initial spin-up as the models adjust to being driven from the YOTC analysis. A comparison of the vertical velocity and rainfall with the observations and the YOTC analysis suggests that the phases of convection associated with the MJO are constrained in most models at this lead time, although the rainfall in the suppressed phase is typically overestimated. Although the large-scale dynamics is reasonably constrained, moistening and heating profiles have large inter-model spread. In particular, there are large spreads in convective heating and moistening at mid-levels during the transition to active convection. Radiative heating and cloud parameters have the largest relative spread across models at upper levels during the active phase. A detailed analysis of time-step behaviour shows that some models exhibit strong intermittency in rainfall, and that the relationship between precipitation and dynamics differs between models. The wealth of model outputs archived during this project is a very valuable resource for model developers beyond the study of the MJO. In addition, the findings of this study can inform the design of process model experiments, and inform the priorities for field experiments and future observing systems.
Abstract:
Background: Concerted evolution is normally used to describe parallel changes at different sites in a genome, but it is also observed in languages where a specific phoneme changes to the same other phoneme in many words in the lexicon—a phenomenon known as regular sound change. We develop a general statistical model that can detect concerted changes in aligned sequence data and apply it to study regular sound changes in the Turkic language family. Results: Linguistic evolution, unlike the genetic substitutional process, is dominated by events of concerted evolutionary change. Our model identified more than 70 historical events of regular sound change that occurred throughout the evolution of the Turkic language family, while simultaneously inferring a dated phylogenetic tree. Including regular sound changes yielded an approximately 4-fold improvement in the characterization of linguistic change over a simpler model of sporadic change, improved phylogenetic inference, and returned more reliable and plausible dates for events on the phylogenies. The historical timings of the concerted changes closely follow a Poisson process model, and the sound transition networks derived from our model mirror linguistic expectations. Conclusions: We demonstrate that a model with no prior knowledge of complex concerted or regular changes can nevertheless infer the historical timings and genealogical placements of events of concerted change from the signals left in contemporary data. Our model can be applied wherever discrete elements—such as genes, words, cultural trends, technologies, or morphological traits—can change in parallel within an organism or other evolving group.
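The observation that the timings of concerted changes closely follow a Poisson process can be illustrated generically (this simulation is not the paper's model): under a homogeneous Poisson process, inter-event waiting times are exponentially distributed, so event times can be generated by accumulating exponential gaps.

```python
import random

random.seed(42)
# Simulate event times of a homogeneous Poisson process with rate lam
# by summing exponential inter-event waiting times.
lam = 0.5            # events per unit time (illustrative rate)
horizon = 10000.0

times, t = [], 0.0
while True:
    t += random.expovariate(lam)
    if t > horizon:
        break
    times.append(t)

n = len(times)
mean_gap = times[-1] / n
print(n, round(mean_gap, 2))  # count ~ lam*horizon, mean gap ~ 1/lam
```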
Abstract:
Research on invention has focused on business invention, and little work has been conducted on the process and capabilities required of the individual inventor, or on the capabilities required for a device to be considered an invention. This paper synthesises the results of an empirical survey of ten inventor case studies with current research on invention and recent capability affordance research to develop an integrated capability process model of the human capabilities for invention and the specific capabilities of an invented device. We identify eight necessary human effectivities required for individual invention capability and six functional key activities using these effectivities to deliver the functional capability of invention. We also identified key differences between invention and general problem-solving processes. Results suggest that inventive step capability relies on a unique application of principles relating to a new combination of affordance chain with a new mechanism and/or space-time (affordance) path representing the novel way the device works, in conjunction with defined critical affordance operating factors that are the subject of the patent claims.
Abstract:
We utilized an ecosystem process model (SIPNET, simplified photosynthesis and evapotranspiration model) to estimate carbon fluxes of gross primary productivity and total ecosystem respiration of a high-elevation coniferous forest. The data assimilation routine incorporated aggregated twice-daily measurements of the net ecosystem exchange of CO2 (NEE) and satellite-based reflectance measurements of the fraction of absorbed photosynthetically active radiation (fAPAR) on an eight-day timescale. From these data we conducted a data assimilation experiment with fifteen different combinations of available data using twice-daily NEE, aggregated annual NEE, eight-day fAPAR, and average annual fAPAR. Model parameters were conditioned on three years of NEE and fAPAR data, and results were evaluated to determine the information content of the different combinations of data streams. Across the data assimilation experiments conducted, model selection metrics such as the Bayesian Information Criterion and Deviance Information Criterion obtained minimum values when assimilating average annual fAPAR and twice-daily NEE data. Application of wavelet coherence analyses showed higher correlations between measured and modeled fAPAR on longer timescales, ranging from 9 to 12 months. There were strong correlations between measured and modeled NEE (R2, coefficient of determination, 0.86), but correlations between measured and modeled eight-day fAPAR were quite poor (R2 = −0.94). We conclude that this inability to reproduce fAPAR on an eight-day timescale would improve with consideration of radiative transfer through the plant canopy. Modeled fluxes when assimilating average annual fAPAR and annual NEE were comparable to corresponding results when assimilating twice-daily NEE, albeit with greater uncertainty. Our results support the conclusion that for this coniferous forest twice-daily NEE data are a critical measurement stream for the data assimilation.
The results from this modeling exercise indicate that for this coniferous forest, annual averages of satellite-based fAPAR measurements paired with annual NEE estimates may provide spatial detail on components of ecosystem carbon fluxes in the proximity of eddy covariance towers. Inclusion of other independent data streams in the assimilation will also reduce uncertainty on modeled values.
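The Bayesian Information Criterion used for model selection above trades goodness of fit against model complexity, BIC = k·ln(n) − 2·ln(L). A generic sketch of such a comparison (illustrative residuals, not SIPNET output):

```python
import math

# With Gaussian residuals of variance sigma2, -2*ln(L) = n*ln(sigma2)
# up to a constant, so fits can be compared via residual variance.
# Residuals below are invented for illustration.
def bic(residuals, k):
    n = len(residuals)
    sigma2 = sum(r * r for r in residuals) / n
    return n * math.log(sigma2) + k * math.log(n)

res_simple = [0.9, -1.1, 1.0, -0.8, 1.2, -1.0]   # poorer fit, k = 2 parameters
res_rich = [0.5, -0.6, 0.4, -0.5, 0.6, -0.4]     # better fit, k = 4 parameters

b1 = bic(res_simple, 2)
b2 = bic(res_rich, 4)
print(b2 < b1)  # the richer model wins only if the fit gain beats the penalty
```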
Abstract:
Observed and predicted changes in the strength of the westerly winds blowing over the Southern Ocean have motivated a number of studies of the response of the Antarctic Circumpolar Current and Southern Ocean Meridional Overturning Circulation (MOC) to wind perturbations and led to the discovery of the "eddy-compensation" regime, wherein the MOC becomes insensitive to wind changes. In addition to the MOC, tracer transport also depends on mixing processes. Here we show, in a high-resolution process model, that isopycnal mixing by mesoscale eddies is strongly dependent on the wind strength. This dependence can be explained by mixing-length theory and is driven by increases in eddy kinetic energy; the mixing length does not change strongly in our simulation. Simulation of a passive ventilation tracer (analogous to CFCs or anthropogenic CO2) demonstrates that variations in tracer uptake across experiments are dominated by changes in isopycnal mixing, rather than changes in the MOC. We argue that, to properly understand tracer uptake under different wind-forcing scenarios, the sensitivity of isopycnal mixing to winds must be accounted for.
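The mixing-length argument can be made concrete: the isopycnal diffusivity is commonly estimated as K = Γ·u_rms·L_mix with u_rms = √(2·EKE), so if winds raise the eddy kinetic energy while the mixing length stays fixed, K grows as √EKE. A toy calculation with illustrative numbers (not values from the simulations):

```python
import math

# Mixing-length estimate of isopycnal eddy diffusivity:
#   K = Gamma * u_rms * L_mix,  with  u_rms = sqrt(2 * EKE).
# Numbers below are illustrative only.
gamma = 0.35          # mixing efficiency (dimensionless)
l_mix = 50e3          # mixing length, m (held fixed, as found above)

def diffusivity(eke):
    return gamma * math.sqrt(2.0 * eke) * l_mix

k_weak = diffusivity(0.01)    # EKE in m^2/s^2, weaker winds
k_strong = diffusivity(0.04)  # 4x the EKE under stronger winds

print(round(k_strong / k_weak, 2))  # K doubles: it scales as sqrt(EKE)
```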
Abstract:
In this work we construct the stationary measure of the N species totally asymmetric simple exclusion process in a matrix product formulation. We make the connection between the matrix product formulation and the queueing theory picture of Ferrari and Martin. In particular, in the standard representation, the matrices act on the space of queue lengths. For N > 2 the matrices in fact become tensor products of elements of quadratic algebras. This enables us to give a purely algebraic proof of the stationary measure which we present for N=3.
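As standard background to the abstract above (not the paper's N-species matrix-product construction), the single-species TASEP on a ring has a stationary measure that is uniform over all configurations with a fixed particle number, which a direct Monte Carlo estimate can check:

```python
import random
from collections import Counter

random.seed(7)
# Single-species TASEP on a ring of L sites with M particles: pick a
# random site; its particle hops right if the next site is empty.
# The stationary measure is uniform over all C(L, M) configurations.
L, M = 5, 2
state = [1, 1, 0, 0, 0]

counts = Counter()
for step in range(200000):
    i = random.randrange(L)
    j = (i + 1) % L
    if state[i] == 1 and state[j] == 0:
        state[i], state[j] = 0, 1
    if step > 10000:                     # discard burn-in
        counts[tuple(state)] += 1

total = sum(counts.values())
freqs = [c / total for c in counts.values()]
uniform = 1 / 10                          # C(5,2) = 10 configurations
print(len(counts))  # all 10 configurations are visited
```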
Abstract:
The aim of the project is to develop a tool for a management system at the advertising agency Confetti. The company has long felt a need to introduce a quality and environmental system, but lacks the resources of knowledge, time and money. Its goal is concrete documentation that describes its quality policy and how the business functions. The work was performed by mapping and analyzing the company's processes according to the process model, with focus on one pilot process. New routines have been developed and documented in a management system. Confetti would like its management system to fulfil the ISO standard so that it can eventually be certified; a literature study of management systems and standards was therefore conducted, and the project was directed to fulfil the ISO standard's requirements for a quality management system. The result is a comprehensible and user-friendly management system that includes a routine model and new routines for the pilot process of PDF handling. A one-year plan has been created to facilitate the continuous development of the quality management system at Confetti. Because the new quality measures require investment, a budget demonstrating profitability has been calculated. Confetti must continue developing the management system very soon: it ought to analyze the company's current situation and choose the processes it considers most important or easiest to improve. A management system will not operate by itself; it needs maintenance and development.
Abstract:
Research shows that changes in information technology and the increasing acquisition of new software have led to underlying problems that can affect heterogeneous software-licensing environments and large organizations. The underlying problem, in the broad context, is software management; software license management is one branch of that larger problem. Large organizations such as municipalities are affected by this underlying problem because of the complexity of their environments, and changes in software licensing cannot be implemented without changing the whole organizational process that accompanies it. The case study's assignment concerns a new, comprehensive area of software license management that offers valuable lessons and experience. The thesis describes the software license management of a municipality and the problems with its current license management process. Preparatory work consisting of a literature study, together with the data generation methods of interviews, document studies and observations, was used to study the case in depth. The goal is to identify the current problems, analyze them, and recommend measures that the studied case object, the IT office of Falu Kommun, can adopt. A recommendation for a clear license management process model is considered a useful academic contribution, since the problem of software license management is a general one. The thesis results in a process model for software license management in organizations with IT service customers: a generic solution that could also be used by other municipalities and similar organizations.
Abstract:
It is an everyday experience to realize that things do not turn out as expected. But what if you realize that everything you have so far experienced as reality is an illusion? This article is about former members of the Jehovah’s Witnesses who have had doubts about what they previously believed to be the Truth. The article also treats the exit process, from being a Jehovah’s Witness to becoming an ex-Jehovah’s Witness. The data consist of twenty qualitative interviews with ten Jehovah’s Witnesses and twenty qualitative interviews with ten former Jehovah’s Witnesses. The data also include a diary written during the four years preceding an exit from the organization. The analysis was made through thematic concentration. Ontologically, the analysis and the article are based on a constructionist view, though the work is mainly empirical with no further theoretical assessment. However, to understand the results, a contextual frame is sketched with two kinds of factors affecting members who make an exit. First there are tying factors that bind the person closer to the organization: closeness and friendship, and confirmation. Secluding factors are those that seclude the member from the outside society: the work situation and »closed doors«. With high values on these factors the exit process will be more arduous. The results are presented through a process model in which different phases or steps in the exit process are described: (1) different levels of doubts; (2) trying out doubts; (3) turning points; (4) different decisions; (5) different steps in execution; (6) floating; (7) relative neutrality. The process is defined as an altogether ambivalent and emotionally tough experience, but other parts of life may be affected as well, such as employment, social life, family life and career.