978 results for Aggregate Programming Spatial Computing Scafi Alchemist
Abstract:
The view of the expanding European economy of 1870-1913 as providing increasing welfare to everybody has been challenged by many, then and now. We focus on the amazing growth that was experienced, its diffusion and its sources, in the context of the permanent competition among European nation states. During 1870-1913 the globalized European economy reached a silver age. GDP growth was quite rapid (2.15% per annum) and diffused all over Europe. Even discounting the high rate of population growth (1.06%), per capita growth remained at a respectable 1.08%. Income per capita was rising in every country, and the rates of improvement were quite similar. This was a major achievement after two generations of highly localized growth, both geographically and socially. Growth was based on the increased use of labour and capital, but a good part of it (73 per cent for the weighted average of the best documented European countries) came from total factor productivity, efficiency gains whose ultimate sources remain poorly specified. This proportion suggests that the European economy was growing at full capacity, at its production frontier; it would have been very difficult to improve its performance. Within Europe, convergence was limited, and it was only in motion after 1900. What happened was more the end of the era of big divergence than the start of an era of convergence.
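As a worked check on these figures: the per capita rate follows from the GDP and population rates, and the 73 per cent figure can be read as the TFP term of a standard growth-accounting decomposition. The identity below is the textbook one, with factor shares left implicit; it is an illustration, not quoted from the abstract.

```latex
% Illustrative, standard growth accounting; not quoted from the abstract.
\[
  g_{y} \;=\; \frac{1 + g_{Y}}{1 + g_{N}} - 1
        \;=\; \frac{1.0215}{1.0106} - 1 \;\approx\; 1.08\%,
  \qquad
  g_{Y} \;=\; \alpha\, g_{K} + (1-\alpha)\, g_{L} + g_{A},
  \quad
  \frac{g_{A}}{g_{Y}} \;\approx\; 0.73 .
\]
```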
Abstract:
Models incorporating more realistic models of customer behavior, such as customers choosing from an offer set, have recently become popular in assortment optimization and revenue management. The dynamic program for these models is intractable and is approximated by a deterministic linear program, called the CDLP, which has an exponential number of columns. However, when the segment consideration sets overlap, the CDLP is difficult to solve. Column generation has been proposed, but finding an entering column has been shown to be NP-hard. In this paper we propose a new approach, called SDCP, to solving CDLP based on segments and their consideration sets. SDCP is a relaxation of CDLP and hence forms a looser upper bound on the dynamic program, but it coincides with CDLP for the case of non-overlapping segments. If the number of elements in a consideration set for a segment is not very large, SDCP can be applied to any discrete-choice model of consumer behavior. We tighten the SDCP bound by (i) simulations, called the randomized concave programming (RCP) method, and (ii) adding cuts to a recent compact formulation of the problem for a latent multinomial-choice model of demand (SBLP+). This latter approach turns out to be very effective, essentially obtaining the CDLP value, and gives excellent revenue performance in simulations, even for overlapping segments. By formulating the problem as a separation problem, we give insight into why CDLP is easy for the MNL with non-overlapping consideration sets and why generalizations of MNL pose difficulties. We perform numerical simulations to determine the revenue performance of all the methods on reference data sets in the literature.
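For context, a minimal sketch of the multinomial-logit (MNL) purchase probabilities over an offer set, the customer-choice primitive these linear programs are built on. The preference weights and consideration set below are illustrative placeholders, not values from the paper.

```python
import numpy as np

def mnl_choice_probs(offer_set, consideration_set, weights, w_nopurchase=1.0):
    """MNL purchase probabilities for one customer segment.

    A segment only considers products in its consideration set, so the
    effective choice set is its intersection with what is offered.
    Weights are illustrative preference weights, not the paper's data.
    """
    effective = [j for j in offer_set if j in consideration_set]
    denom = w_nopurchase + sum(weights[j] for j in effective)
    probs = {j: weights[j] / denom for j in effective}
    probs["no_purchase"] = w_nopurchase / denom
    return probs

# Illustrative example: two products offered, segment considers products 0 and 2.
weights = {0: 1.5, 1: 0.8, 2: 2.0}
print(mnl_choice_probs(offer_set={0, 1}, consideration_set={0, 2}, weights=weights))
```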
Abstract:
I revisit the General Theory's discussion of the role of wages in employment determination through the lens of the New Keynesian model. The analysis points to the key role played by the monetary policy rule in shaping the link between wages and employment, and in determining the welfare impact of enhanced wage flexibility. I show that the latter is not always welfare improving.
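In the baseline New Keynesian model, the "monetary policy rule" is typically an interest-rate rule of the Taylor type; the version below uses standard textbook notation and is not taken from the paper.

```latex
% Standard Taylor-type rule; textbook notation, illustrative, not the paper's.
\[
  i_{t} \;=\; \rho \;+\; \phi_{\pi}\, \pi_{t} \;+\; \phi_{y}\, \hat{y}_{t} .
\]
```

In this class of models, how strongly the rule responds to inflation and the output gap (\(\phi_{\pi}\), \(\phi_{y}\)) governs how wage changes pass through to employment, which is why the welfare effect of greater wage flexibility can go either way.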
Abstract:
This paper argues that, in the presence of intersectoral input-output linkages, microeconomic idiosyncratic shocks may lead to aggregate fluctuations. In particular, it shows that, as the economy becomes more disaggregated, the rate at which aggregate volatility decays is determined by the structure of the network capturing such linkages. Our main results provide a characterization of this relationship in terms of the importance of different sectors as suppliers to their immediate customers as well as their role as indirect suppliers to chains of downstream sectors. Such higher-order interconnections capture the possibility of "cascade effects" whereby productivity shocks to a sector propagate not only to its immediate downstream customers, but also indirectly to the rest of the economy. Our results highlight that sizable aggregate volatility is obtained from sectoral idiosyncratic shocks only if there exists significant asymmetry in the roles that sectors play as suppliers to others, and that the "sparseness" of the input-output matrix is unrelated to the nature of aggregate fluctuations.
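A minimal numerical sketch of the mechanism described: with independent sectoral shocks, aggregate volatility scales with the Euclidean norm of a network-based influence vector, so volatility decays slowly when a few sectors are disproportionately important suppliers. The Leontief-inverse construction and all parameter values below are illustrative assumptions, not the paper's exact specification.

```python
import numpy as np

def influence_vector(W, alpha=0.3):
    """Network influence vector for an n-sector economy (illustrative form).

    W[i, j] = share of sector i's intermediate inputs bought from sector j
    (rows sum to 1).  Sector j's influence aggregates its direct and
    indirect role as a supplier through a Leontief-type inverse.
    """
    n = W.shape[0]
    L = np.linalg.inv(np.eye(n) - (1 - alpha) * W)
    return (alpha / n) * L.sum(axis=0)

def aggregate_volatility(W, sigma=0.05, alpha=0.3):
    """Std. dev. of log aggregate output under independent sectoral shocks."""
    v = influence_vector(W, alpha)
    return sigma * np.linalg.norm(v)          # ~ sigma * ||v||_2

n = 100
W_uniform = np.full((n, n), 1.0 / n)          # symmetric economy, no asymmetry
W_star = np.zeros((n, n)); W_star[:, 0] = 1.0 # one hub supplies every sector

print(aggregate_volatility(W_uniform))  # decays like sigma / sqrt(n)
print(aggregate_volatility(W_star))     # stays sizable despite n = 100
```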
Abstract:
The choice network revenue management model incorporates customer purchase behavior as a function of the offered products, and is the appropriate model for airline and hotel network revenue management, dynamic sales of bundles, and dynamic assortment optimization. The optimization problem is a stochastic dynamic program and is intractable. A certainty-equivalence relaxation of the dynamic program, called the choice deterministic linear program (CDLP), is usually used to generate dynamic controls. Recently, a compact linear programming formulation of this linear program was given for the multi-segment multinomial-logit (MNL) model of customer choice with non-overlapping consideration sets. Our objective is to obtain a tighter bound than this formulation while retaining the appealing properties of a compact linear programming representation. To this end, it is natural to consider the affine relaxation of the dynamic program. We first show that the affine relaxation is NP-complete even for a single-segment MNL model. Nevertheless, by analyzing the affine relaxation we derive a new compact linear program that approximates the dynamic programming value function better than CDLP, provably between the CDLP value and the affine relaxation, and often coming close to the latter in our numerical experiments. When the segment consideration sets overlap, we show that some strong equalities called product cuts developed for the CDLP remain valid for our new formulation. Finally, we perform extensive numerical comparisons on the various bounds to evaluate their performance.
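For reference, the CDLP mentioned in this and the previous abstract is usually stated along the following lines (single segment, generic textbook notation; the paper's own formulation may differ in details). The decision variable \(t(S)\) is the total time offer set \(S\) is made available over a horizon of length \(T\), \(\lambda\) is the arrival rate, \(r_{j}\) the revenue of product \(j\), \(P_{j}(S)\) its choice probability, and \(Q_{i}(S)\) the expected consumption of resource \(i\) when \(S\) is offered.

```latex
% Commonly stated CDLP form; illustrative notation, not quoted from the paper.
\[
\begin{aligned}
\max_{t \ge 0}\;\; & \sum_{S \subseteq N} \lambda \Big( \sum_{j \in S} r_{j}\, P_{j}(S) \Big)\, t(S) \\
\text{s.t.}\;\; & \sum_{S \subseteq N} \lambda\, Q_{i}(S)\, t(S) \;\le\; c_{i} \qquad \text{for every resource } i, \\
 & \sum_{S \subseteq N} t(S) \;\le\; T .
\end{aligned}
\]
```

One variable per offer set \(S\) is what produces the exponential number of columns, and hence the reliance on column generation when consideration sets overlap.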
Abstract:
In a weighted spatial network, as specified by an exchange matrix, the variances of the spatial values are inversely proportional to the size of the regions. Spatial values are no longer exchangeable under independence, thus weakening the rationale for ordinary permutation and bootstrap tests of spatial autocorrelation. We propose an alternative permutation test for spatial autocorrelation, based upon exchangeable spatial modes, constructed as linear orthogonal combinations of spatial values. The coefficients are obtained as eigenvectors of the standardised exchange matrix appearing in spectral clustering, and generalise to the weighted case the concept of spatial filtering for connectivity matrices. Also, two proposals aimed at transforming an accessibility matrix into an exchange matrix with a priori fixed margins are presented. Two examples (inter-regional migratory flows and binary adjacency networks) illustrate the formalism, rooted in the theory of spectral decomposition for reversible Markov chains.
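A rough sketch of the spatial-mode construction described, i.e. eigenvectors of a standardised exchange matrix as in spectral clustering. The standardisation, the example matrix and the projection below are generic placeholders, not the paper's exact definitions.

```python
import numpy as np

def spatial_modes(E):
    """Spatial modes as eigenvectors of a standardised exchange matrix.

    E is a symmetric, nonnegative exchange matrix; f = E.sum(axis=1) gives
    the regional weights.  The symmetric standardisation below is the one
    familiar from spectral clustering, used here as a stand-in for the
    paper's definition.
    """
    f = E.sum(axis=1)
    E_std = E / np.sqrt(np.outer(f, f))
    eigvals, eigvecs = np.linalg.eigh(E_std)
    return eigvals, eigvecs

# Project a spatial field onto the modes; under the null the mode coefficients
# (unlike the raw regional values) are treated as exchangeable, so they can be
# permuted to build the reference distribution of an autocorrelation statistic.
E = np.array([[0.0, 2.0, 1.0],
              [2.0, 0.0, 3.0],
              [1.0, 3.0, 0.0]])   # illustrative exchange matrix
x = np.array([0.4, -1.2, 0.8])   # illustrative spatial values
_, modes = spatial_modes(E)
print(modes.T @ x)               # coordinates of x in the mode basis
```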
Efficiency and equilibrium with locally increasing aggregate returns due to demand complementarities
Abstract:
This paper analyzes the flow of intermediate inputs across sectors by adopting a network perspective on sectoral interactions. I apply these tools to show how fluctuations in aggregate economic activity can be obtained from independent shocks to individual sectors. First, I characterize the network structure of input trade in the U.S. On the demand side, a typical sector relies on a small number of key inputs, and sectors are homogeneous in this respect. However, in their role as input suppliers, sectors do differ: many specialized input suppliers coexist alongside general-purpose sectors functioning as hubs to the economy. I then develop a model of intersectoral linkages that can reproduce these connectivity features. In a standard multisector setup, I use this model to provide analytical expressions linking aggregate volatility to the network structure of input trade. I show that the presence of sectoral hubs - by coupling production decisions across sectors - leads to fluctuations in aggregates.
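A small illustration of the asymmetry described: in an input-share matrix, in-degrees (the number of key inputs a sector uses) are fairly uniform, while out-degrees (the number of customers a supplier serves) can be heavily skewed by hub sectors. The random matrix, the number of hubs and the link threshold below are purely illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 50

# Illustrative input-share matrix: W[i, j] = share of sector i's inputs from j.
# A handful of "hub" suppliers serve almost everyone; the rest are specialized.
W = np.zeros((n, n))
hubs = rng.choice(n, size=3, replace=False)
for i in range(n):
    suppliers = np.union1d(hubs, rng.choice(n, size=4, replace=False))
    W[i, suppliers] = rng.dirichlet(np.ones(len(suppliers)))

threshold = 0.01                           # count links carrying >= 1% of inputs
in_degree = (W > threshold).sum(axis=1)    # key inputs used by each sector
out_degree = (W > threshold).sum(axis=0)   # customers served by each supplier

print("in-degree  mean/max:", in_degree.mean(), in_degree.max())    # roughly uniform
print("out-degree mean/max:", out_degree.mean(), out_degree.max())  # skewed by hubs
```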
Abstract:
Background: Previous magnetic resonance imaging (MRI) studies in young patients with bipolar disorder indicated the presence of grey matter concentration changes as well as microstructural alterations in white matter in various neocortical areas and the corpus callosum. Whether these structural changes are also present in elderly patients with bipolar disorder with long-lasting clinical evolution remains unclear. Methods: We performed a prospective MRI study of consecutive elderly, euthymic patients with bipolar disorder and healthy, elderly controls. We conducted a voxel-based morphometry (VBM) analysis and a tract-based spatial statistics (TBSS) analysis to assess fractional anisotropy and longitudinal, radial and mean diffusivity derived by diffusion tensor imaging (DTI). Results: We included 19 patients with bipolar disorder and 47 controls in our study. Fractional anisotropy was the most sensitive DTI marker and decreased significantly in the ventral part of the corpus callosum in patients with bipolar disorder. Longitudinal, radial and mean diffusivity showed no significant between-group differences. Grey matter concentration was reduced in patients with bipolar disorder in the right anterior insula, head of the caudate nucleus, nucleus accumbens, ventral putamen and frontal orbital cortex. Conversely, there was no grey matter concentration or fractional anisotropy increase in any brain region in patients with bipolar disorder compared with controls. Limitations: The major limitation of our study is the small number of patients with bipolar disorder. Conclusion: Our data document the concomitant presence of grey matter concentration decreases in the anterior limbic areas and the reduced fibre tract coherence in the corpus callosum of elderly patients with long-lasting bipolar disorder.
Abstract:
Serum-free aggregating cell cultures of fetal rat telencephalon treated with low doses (0.5 nM) of epidermal growth factor (EGF) showed a small, transient increase in DNA synthesis but no significant changes in total DNA and protein content. By contrast, treatment with high doses (13 nM) of EGF caused a marked stimulation of DNA synthesis as well as a net increase in DNA and protein content. The expression of the astrocyte-specific enzyme, glutamine synthetase, was greatly enhanced both at low and at high EGF concentrations. These results suggest that at low concentration EGF stimulates exclusively the differentiation of astrocytes, whereas at high concentration, EGF has also a mitogenic effect. Nonproliferating astrocytes in cultures treated with 0.4 microM 1-beta-D-arabinofuranosyl-cytosine were refractory to EGF treatment, indicating that their responsiveness to EGF is cell cycle-dependent. Binding studies using a crude membrane fraction of 5-day cultures showed a homogeneous population of EGF binding sites (Kd approximately equal to 2.6 nM). Specific EGF binding sites were found also in non-proliferating (and nonresponsive) cultures, although they showed slightly reduced affinity and binding capacity. This finding suggests that the cell cycle-dependent control of astroglial responsiveness to EGF does not occur at the receptor level. However, it was found that the specific EGF binding sites disappear with progressive cellular differentiation.
Abstract:
Aim This study used data from temperate forest communities to assess: (1) five different stepwise selection methods with generalized additive models, (2) the effect of weighting absences to ensure a prevalence of 0.5, (3) the effect of limiting absences beyond the environmental envelope defined by presences, (4) four different methods for incorporating spatial autocorrelation, and (5) the effect of integrating an interaction factor defined by a regression tree on the residuals of an initial environmental model. Location State of Vaud, western Switzerland. Methods Generalized additive models (GAMs) were fitted using the grasp package (generalized regression analysis and spatial predictions, http://www.cscf.ch/grasp). Results Model selection based on cross-validation appeared to be the best compromise between model stability and performance (parsimony) among the five methods tested. Weighting absences returned models that perform better than models fitted with the original sample prevalence. This appeared to be mainly due to the impact of very low prevalence values on evaluation statistics. Removing zeroes beyond the range of presences on main environmental gradients changed the set of selected predictors, and potentially their response curve shape. Moreover, removing zeroes slightly improved model performance and stability when compared with the baseline model on the same data set. Incorporating a spatial trend predictor improved model performance and stability significantly. Even better models were obtained when including local spatial autocorrelation. A novel approach to include interactions proved to be an efficient way to account for interactions between all predictors at once. Main conclusions Models and spatial predictions of 18 forest communities were significantly improved by using either: (1) cross-validation as a model selection method, (2) weighted absences, (3) limited absences, (4) predictors accounting for spatial autocorrelation, or (5) a factor variable accounting for interactions between all predictors. The final choice of model strategy should depend on the nature of the available data and the specific study aims. Statistical evaluation is useful in searching for the best modelling practice. However, one should not neglect to consider the shapes and interpretability of response curves, as well as the resulting spatial predictions in the final assessment.
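A minimal sketch of the absence-weighting step described in point (2), i.e. case weights chosen so that the weighted prevalence equals 0.5. The weighting scheme shown is the obvious one and may differ in detail from the grasp implementation.

```python
import numpy as np

def prevalence_weights(y):
    """Case weights giving presences and absences equal total weight (prevalence 0.5).

    y is a 0/1 vector of absences/presences.  Presences keep weight 1; each
    absence is re-weighted by n_presence / n_absence.  Illustrative scheme,
    not necessarily grasp's own.
    """
    y = np.asarray(y)
    n_pres, n_abs = (y == 1).sum(), (y == 0).sum()
    return np.where(y == 1, 1.0, n_pres / n_abs)

y = np.array([1, 1, 0, 0, 0, 0, 0, 0])        # raw prevalence 0.25
w = prevalence_weights(y)
print(w, np.sum(w * y) / np.sum(w))           # weighted prevalence -> 0.5
```

The resulting weights can then be supplied as case weights when fitting the GAM, so that evaluation statistics are not dominated by very low sample prevalence.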
Abstract:
We present a new unifying framework for investigating throughput-WIP (Work-in-Process) optimal control problems in queueing systems, based on reformulating them as linear programming (LP) problems with special structure: we show that if a throughput-WIP performance pair in a stochastic system satisfies the Threshold Property we introduce in this paper, then we can reformulate the problem of optimizing a linear objective of throughput-WIP performance as a (semi-infinite) LP problem over a polygon with special structure (a threshold polygon). The strong structural properties of such polygons explain the optimality of threshold policies for optimizing linear performance objectives: their vertices correspond to the performance pairs of threshold policies. We analyze in this framework the versatile input-output queueing intensity control model introduced by Chen and Yao (1990), obtaining a variety of new results, including (a) an exact reformulation of the control problem as an LP problem over a threshold polygon; (b) an analytical characterization of the Min WIP function (giving the minimum WIP level required to attain a target throughput level); (c) an LP Value Decomposition Theorem that relates the objective value under an arbitrary policy with that of a given threshold policy (thus revealing the LP interpretation of Chen and Yao's optimality conditions); (d) diminishing returns and invariance properties of throughput-WIP performance, which underlie threshold optimality; and (e) a unified treatment of the time-discounted and time-average cases.
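Schematically, the structure described can be written as a linear program over the achievable throughput-WIP region. The notation below is generic (\(\theta(u)\) and \(L(u)\) are the throughput and WIP of policy \(u\), and \(c_{1}, c_{2} \ge 0\) are cost coefficients); the precise definition of the threshold polygon is the paper's and is not reproduced here.

```latex
% Schematic only; generic notation, not the paper's exact formulation.
\[
  \max_{u}\; c_{1}\,\theta(u) \;-\; c_{2}\,L(u)
  \;=\;
  \max\Big\{\, c_{1}\,\theta - c_{2}\,L \;:\; (\theta, L) \in \mathcal{P} \,\Big\}.
\]
```

When the achievable region \(\mathcal{P}\) is a threshold polygon, its vertices are the performance pairs \((\theta_{k}, L_{k})\) of threshold policies, and a linear objective over a polygon attains its optimum at a vertex, which is the structural reason threshold policies are optimal.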