992 results for Mapping Problem
Abstract:
We consider, both theoretically and empirically, how different organization modes are aligned to govern the efficient solving of technological problems. The data set is a sample from the Chinese consumer electronics industry. Following mainly the problem-solving perspective (PSP) within the knowledge-based view (KBV), we develop and test several PSP and KBV hypotheses, in conjunction with competing transaction cost economics (TCE) alternatives, in an examination of the determinants of the R&D organization mode. The results show that a firm’s existing knowledge base is the single most important explanatory variable. Problem complexity and decomposability are also found to be important, consistent with the theoretical predictions of the PSP, but it is suggested that these two dimensions need to be treated as separate variables. The TCE hypotheses also receive some support, but the estimation results seem more supportive of the PSP and the KBV than of the TCE.
Abstract:
"See the abstract at the beginning of the document in the attached file."
Abstract:
This study presents a first attempt to extend the “Multi-scale integrated analysis of societal and ecosystem metabolism” (MuSIASEM) approach to a spatial dimension using GIS techniques in the metropolitan area of Barcelona. We use a combination of census and commercial databases, along with a detailed land cover map, to create a layer of Common Geographic Units that we populate with the local values of human time spent in different activities according to the MuSIASEM hierarchical typology. In this way, we mapped the hours of available human time against the working hours spent in different locations, highlighting the gradients in spatial density between the residential locations of workers (generating the labor supply) and the places where the working hours actually take place. We found a strong trimodal pattern of clusters of areas with different combinations of time spent on household activities and on paid work. We also measured and mapped the spatial segregation between these two activities and put forward the conjecture that this segregation increases with higher energy throughput, since the size of the functional units must be able to cope with the flow of exosomatic energy. Finally, we discuss the effectiveness of the approach by comparing our geographic representation of exosomatic throughput with that obtained by conventional methods.
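The segregation between household time and paid-work time across geographic units can be measured with a standard dissimilarity index, a sketch of which follows. The unit values below are hypothetical and purely illustrative; the study's actual data and index choice are not reproduced here.

```python
def dissimilarity_index(household_hours, paid_work_hours):
    """Duncan dissimilarity index between two activity distributions
    over geographic units: D = 0.5 * sum_i |a_i/A - b_i/B|.
    D = 0 means identical spatial distributions; D = 1 means
    complete segregation of the two activities."""
    total_h = sum(household_hours)
    total_w = sum(paid_work_hours)
    return 0.5 * sum(abs(h / total_h - w / total_w)
                     for h, w in zip(household_hours, paid_work_hours))

# Hypothetical hours for four Common Geographic Units, with household
# time concentrated where paid work is scarce and vice versa:
units_household = [120.0, 80.0, 40.0, 10.0]
units_paid_work = [10.0, 40.0, 80.0, 120.0]
print(dissimilarity_index(units_household, units_paid_work))  # about 0.6
```

A value near 0.6 here reflects the strong spatial separation built into the toy data; identical distributions would give 0.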
Abstract:
We prove existence theorems for the Dirichlet problem for hypersurfaces of constant special Lagrangian curvature in Hadamard manifolds. The first results are obtained using the continuity method and approximation, and are then refined using two iterations of the Perron method. The a priori estimates used in the continuity method are valid in any ambient manifold.
Abstract:
The molecular karyotypes of 20 reference strains of species complexes of Leishmania were determined by contour-clamped homogeneous electric field (CHEF) electrophoresis. Determination of the number/position of chromosome-sized bands and the chromosomal DNA locations of house-keeping genes were the two criteria used for differentiating and classifying the Leishmania species. We established two gel running conditions for optimal separation of chromosomes, which resolved DNA molecules as large as 2,500 kilobase pairs (kb). Chromosomes were polymorphic in the number (22-30) and size (200-2,500 kb) of bands among members of five complexes of Leishmania. Although each stock had a distinct karyotype, in general the differences found between strains and/or species within each complex were not clear enough for parasite identification. However, each group showed a specific number of size-concordant DNA molecules, which allowed distinction among the Leishmania complex parasites. Clear differences between the Old and New World groups of parasites, or among some New World Leishmania species, were also apparent in relation to the chromosome locations of beta-tubulin genes. Based on these results, as well as data from other published studies, the potential of using DNA karyotypes for identifying and classifying leishmanial field isolates is discussed.
Abstract:
Debris flow susceptibility mapping at a regional scale has been the subject of various studies. The complexity of the phenomenon and the variability of local controlling factors limit the use of process-based models for a first assessment. GIS-based approaches combining automatic detection of the source areas with a simple assessment of the debris flow spreading may provide a substantial basis for a preliminary susceptibility assessment at the regional scale. The use of a digital elevation model with a 10 m resolution for the Canton de Vaud territory (Switzerland), together with a lithological map and a land use map, has allowed automatic identification of the potential source areas. The spreading estimates are based on basic probabilistic and energy calculations that make it possible to define the maximum runout distance of a debris flow.
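An energy-based runout estimate of the kind described above is often sketched with the classical energy-line (travel-angle) principle: the flow stops where a line drawn from the source at a fixed travel angle intersects the terrain. A minimal 1-D sketch follows; the elevation profile and the 11° travel angle are illustrative assumptions, not the study's calibration.

```python
import math

def max_runout(profile, cell_size, travel_angle_deg):
    """profile: terrain elevations (m) along a downslope path, with
    profile[0] at the source cell; cell_size: horizontal spacing (m).
    Returns the horizontal runout distance at which the energy line,
    descending at travel_angle_deg from the source, meets the terrain."""
    z0 = profile[0]
    drop_per_cell = math.tan(math.radians(travel_angle_deg)) * cell_size
    for i in range(1, len(profile)):
        energy_line = z0 - drop_per_cell * i   # energy-line elevation here
        if energy_line <= profile[i]:          # line reaches terrain: stop
            return i * cell_size
    return (len(profile) - 1) * cell_size      # flow crosses whole profile

# Hypothetical 10 m DEM profile (steep start, flattening runout zone):
profile = [100, 95, 92, 90, 89, 88.5, 88.2, 88.0, 87.9]
print(max_runout(profile, cell_size=10, travel_angle_deg=11))  # → 70
```

The flow keeps moving while the terrain stays below the 11° energy line and stops on the flattening tail, 70 m from the source in this toy profile.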
Abstract:
The aim of this study is to perform a thorough comparison of quantitative susceptibility mapping (QSM) techniques and their dependence on the assumptions made. The compared methodologies were: two iterative single-orientation methodologies minimizing the l2 and l1TV norms of the prior knowledge of the edges of the object, one over-determined multiple-orientation method (COSMOS), and a newly proposed modulated closed-form solution (MCF). The performance of these methods was compared using a numerical phantom and in-vivo high-resolution (0.65 mm isotropic) brain data acquired at 7 T using a new coil combination method. For all QSM methods, the relevant regularization and prior-knowledge parameters were systematically varied in order to evaluate the optimal reconstruction in the presence and absence of a ground truth. Additionally, the QSM contrast was compared to conventional gradient recalled echo (GRE) magnitude and R2* maps obtained from the same dataset. The QSM reconstruction results of the single-orientation methods show comparable performance. The MCF method has the highest correlation (corrMCF = 0.95, r²MCF = 0.97) with the state-of-the-art method (COSMOS), with the additional advantage of an extremely fast computation time. The l-curve method gave the visually most satisfactory balance between reduction of streaking artifacts and over-regularization, with the latter being overemphasized when using the COSMOS susceptibility maps as ground truth. R2* and susceptibility maps, when calculated from the same datasets, although based on distinct features of the data, have a comparable ability to distinguish deep gray matter structures.
Abstract:
"See the abstract at the beginning of the document in the attached file."
Abstract:
We establish existence and non-existence results to the Brezis-Nirenberg type problem involving the square root of the Laplacian in a bounded domain with zero Dirichlet boundary condition.
Abstract:
A family of nonempty closed convex sets is built using the data of the generalized Nash equilibrium problem (GNEP). The sets are selected iteratively in such a way that the intersection of the selected sets contains solutions of the GNEP. The algorithm introduced by Iusem and Sosa (2003) is adapted to obtain solutions of the GNEP. Finally, some numerical experiments are given to illustrate the numerical behavior of the algorithm.
Abstract:
The division problem consists of allocating a given amount of a homogeneous and perfectly divisible good among a group of agents with single-peaked preferences on the set of their potential shares. A rule proposes a vector of shares for each division problem. The literature has implicitly assumed that agents will find acceptable any share they are assigned. In this paper we consider the division problem when agents' participation is voluntary. Each agent has an idiosyncratic interval of acceptable shares on which his preferences are single-peaked. A rule has to propose to each agent either non-participation or an acceptable share, because otherwise he would opt out and this would require reassigning some of the remaining agents' shares. We study a subclass of efficient and consistent rules and characterize extensions of the uniform rule that deal explicitly with agents' voluntary participation.
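The uniform rule that the paper extends can be sketched as follows: under excess demand each agent i receives min(p_i, λ), under excess supply max(p_i, λ), with λ chosen so the shares sum to the available amount. This is the classic rule only, not the paper's voluntary-participation extensions; the peaks and total below are illustrative.

```python
def uniform_rule(total, peaks, tol=1e-9):
    """Classic uniform rule for the division problem: each agent with
    peak p gets min(p, lam) under excess demand (sum of peaks >= total)
    or max(p, lam) under excess supply, with lam found by bisection so
    that the shares sum to `total`."""
    if sum(peaks) >= total:                        # excess demand
        share = lambda lam: [min(p, lam) for p in peaks]
    else:                                          # excess supply
        share = lambda lam: [max(p, lam) for p in peaks]
    lo, hi = 0.0, max(max(peaks), total)
    while hi - lo > tol:                           # sum(share) is monotone in lam
        lam = (lo + hi) / 2
        if sum(share(lam)) < total:
            lo = lam
        else:
            hi = lam
    return share((lo + hi) / 2)

# Three agents with peaks 1, 3 and 6 dividing 6 units (excess demand):
print(uniform_rule(6.0, [1.0, 3.0, 6.0]))  # → approximately [1.0, 2.5, 2.5]
```

Agent 1 gets exactly his peak because it lies below λ = 2.5, while the two larger demands are rationed down to λ, which is what makes the rule strategy-proof for single-peaked preferences.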