939 results for niche partitioning


Relevance: 10.00%

Publisher:

Abstract:

Competition is an immensely important area of study in economic theory, business and strategy. It is vital in meeting consumers’ growing expectations, stimulating growth in the size of the market, pushing innovation, reducing cost and consequently generating better value for end users, among other things. That said, it is important to recognize that supply chains, as we know them, have changed the way companies deal with each other, in both confrontational and conciliatory terms. With the rise of global markets and outsourcing destinations, and with technological advances in transportation, communication and telecommunications, geographical barriers of distance are no longer an obstacle to competition in an increasingly flat world. Even though the dominant articulation of competition within the management and business literature rests mostly on economic competition theory, this thesis draws attention to the implicit shift toward recognizing other forms of competition in today’s business environment, especially those involving supply chain structures. There is broad agreement in the business arena that competition between companies is set to take place along their supply chains, and management’s attention has accordingly focused on how supply chains could become more competitive by making each member firm more efficient. However, there is much disagreement on the mechanism through which such competition, pitting supply chain against supply chain, will take place. The purpose of this thesis, therefore, is to develop and conceptualize the notion of supply chain vs. supply chain competition within the discipline of supply chain management.
The thesis proposes that competition between supply chains may be carried forward via competition theories that emphasize interaction and dimensionality, with chains encountering friction from a number of sources in their search for critical resources and services. It demonstrates how supply chain vs. supply chain competition may be carried out theoretically, using generated data for illustration, and practically, using logistics centers to link theory with the corresponding practice of this evolving mode of competition. The thesis concludes that supply chain vs. supply chain competition, whatever conceptualization is adopted, is complex, novel, and easily distorted and abused. It therefore calls for the joint development of regulatory measures by practitioners and policymakers alike to guide this developing mode of competition.

Clustering is a process of partitioning a given set of patterns into meaningful groups. The clustering process can be viewed as consisting of three phases: (i) a feature selection phase, (ii) a classification phase, and (iii) a description generation phase. Conventional clustering algorithms implicitly use knowledge about the clustering environment to a large extent in the feature selection phase. This reduces the need for environmental knowledge in the remaining two phases, permitting the use of a simple numerical measure of similarity in the classification phase. Conceptual clustering algorithms proposed by Michalski and Stepp [IEEE Trans. PAMI, PAMI-5, 396–410 (1983)] and Stepp and Michalski [Artif. Intell., pp. 43–69 (1986)] make use of knowledge about the clustering environment, in the form of a set of predefined concepts, to compute the conceptual cohesiveness during the classification phase. Michalski and Stepp [IEEE Trans. PAMI, PAMI-5, 396–410 (1983)] have argued that the results obtained with conceptual clustering algorithms are superior to conventional methods of numerical classification. However, this claim was not supported by the experimental results obtained by Dale [IEEE Trans. PAMI, PAMI-7, 241–244 (1985)]. In this paper a theoretical framework, based on an intuitively appealing set of axioms, is developed to characterize the equivalence between conceptual clustering and conventional clustering. In other words, it is shown that any classification obtained using conceptual clustering can also be obtained using conventional clustering, and vice versa.

Before the spread of extensive settled cultivation, the Indian subcontinent would have been inhabited by territorial hunter–gatherers and shifting cultivators with cultural traditions of prudent resource use. The disruption of closed material cycles by export of agricultural produce to centres of non-agricultural population would have weakened these traditions. Indeed, the fire-based sacrificial ritual and extensive agricultural settlements might have catalysed the destruction of forests and wildlife and the suppression of tribal peoples during the agricultural colonization of the Gangetic plains. Buddhism, Jainism and later the Hindu sects may have been responses to the need for a reassertion of ecological prudence once the more fertile lands were brought under cultivation. British rule radically changed the focus of the country's resource use pattern from production of a variety of biological resources for local consumption to the production of a few commodities largely for export. The resulting ecological squeeze was accompanied by disastrous famines and epidemics between the 1860s and the 1920s. The counterflows to tracts of intensive agriculture have reduced such disasters since independence. However, these are quite inadequate to balance the state-subsidized outflows of resources from rural hinterlands. These imbalances have triggered serious environmental degradation and tremendous overcrowding of the niche of agricultural labour and marginal cultivator all over the country.

The K-means clustering algorithm is highly sensitive to the initial seed values. We use a genetic algorithm to find a near-optimal partitioning of a given data set by selecting proper initial seed values for the K-means algorithm. The results are encouraging: in most cases, on data sets with well-separated clusters, the proposed scheme reached the global minimum.
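A minimal sketch of the idea, not the paper's actual implementation: the chromosomes of a toy genetic algorithm are sets of seed indices into the data set, and the fitness of a chromosome is the squared-error (inertia) that K-means reaches when started from those seeds. All function names and GA parameters here are illustrative assumptions.

```python
import numpy as np

def kmeans(X, seed_idx, iters=20):
    """Run Lloyd's K-means from the given seed points; return (inertia, centers)."""
    centers = X[np.asarray(seed_idx)].astype(float)
    for _ in range(iters):
        d = ((X[:, None, :] - centers[None, :, :]) ** 2).sum(-1)
        labels = d.argmin(1)
        for k in range(len(centers)):
            members = X[labels == k]
            if len(members):                      # keep old center if cluster empties
                centers[k] = members.mean(0)
    d = ((X[:, None, :] - centers[None, :, :]) ** 2).sum(-1)
    return d.min(1).sum(), centers

def ga_select_seeds(X, k, pop_size=20, gens=15, seed=0):
    """Evolve seed-index sets; fitness = K-means inertia (lower is better)."""
    rng = np.random.default_rng(seed)
    n = len(X)
    pop = [rng.choice(n, size=k, replace=False) for _ in range(pop_size)]
    best, best_fit = pop[0], np.inf
    for _ in range(gens):
        fits = [kmeans(X, ind)[0] for ind in pop]
        order = np.argsort(fits)
        if fits[order[0]] < best_fit:             # elitism: remember best ever seen
            best_fit, best = fits[order[0]], pop[order[0]].copy()
        parents = [pop[i] for i in order[: pop_size // 2]]
        children = []
        while len(children) < pop_size:
            a, b = rng.integers(len(parents), size=2)
            gene_pool = np.unique(np.concatenate([parents[a], parents[b]]))
            child = rng.choice(gene_pool, size=k, replace=False)   # crossover
            if rng.random() < 0.3:                                 # mutation
                child[rng.integers(k)] = rng.integers(n)
            children.append(child)
        pop = children
    return best, best_fit
```

Because fitness is evaluated by actually running K-means, the GA searches directly over initializations rather than over partitions, which keeps the chromosome length fixed at k.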

Cylindrical specimens of commercially pure titanium have been compressed at strain rates in the range of 0.1 to 100 s⁻¹ and temperatures in the range of 25 °C to 400 °C. At strain rates of 10 and 100 s⁻¹, the specimens exhibited adiabatic shear bands. At lower strain rates, the material deformed in an inhomogeneous fashion. These material-related instabilities are examined in the light of the "phenomenological model" and the "dynamic materials model." It is found that the regime of adiabatic shear band formation is predicted by the phenomenological model, while the dynamic materials model is able to predict the inhomogeneous deformation zone. A criterion based on power partitioning is able to predict the variations within the inhomogeneous deformation zone.

Periglacial processes act in cold, non-glacial regions where landscape development is controlled mainly by frost activity. Roughly 25 percent of the Earth's surface can be considered periglacial. Geographical information systems combined with advanced statistical modeling methods provide an efficient tool and a new theoretical perspective for the study of cold environments. The aims of this study were to: 1) model and predict the abundance of periglacial phenomena in a subarctic environment with statistical modeling, 2) investigate the most important factors affecting the occurrence of these phenomena with hierarchical partitioning, 3) compare two widely used statistical modeling methods, Generalized Linear Models and Generalized Additive Models, 4) study the effect of modeling resolution on prediction, and 5) study how a spatially continuous prediction can be obtained from point data. The observational data of this study consist of 369 points collected during the summers of 2009 and 2010 in the study area at Kilpisjärvi, northern Lapland. The periglacial phenomena of interest were cryoturbations, slope processes, weathering, deflation, nivation and fluvial processes. The features were modeled using Generalized Linear Models (GLM) and Generalized Additive Models (GAM) with Poisson errors. Based on these models, the abundance of periglacial features was predicted onto a spatial grid at a resolution of one hectare. The most important environmental factors were examined with hierarchical partitioning. The effect of modeling resolution was investigated in a small independent study area at a spatial resolution of 0.01 hectare. The models explained 45-70% of the occurrence of periglacial phenomena. When spatial variables were added to the models, the amount of explained deviance was considerably higher, which signalled a geographical trend structure. The ability of the models to predict periglacial phenomena was assessed with independent evaluation data.
Spearman's correlation between the observed and predicted values varied from 0.258 to 0.754. Based on the explained deviance and the results of hierarchical partitioning, the most important environmental variables were mean altitude, vegetation and mean slope angle. The effect of modeling resolution was clear: too coarse a resolution caused a loss of information, while a finer resolution brought out more localized variation. The models' ability to explain and to predict periglacial phenomena in the study area was mostly good and moderate, respectively. Differences between the modeling methods were small, although the explained deviance was higher with the GLM models than with the GAMs; in turn, the GAMs produced more realistic spatial predictions. The single most important environmental variable controlling the occurrence of periglacial phenomena was mean altitude, which correlated strongly with many other explanatory variables. Ongoing global warming will have a great impact especially on cold, high-latitude environments, and an important research topic in the near future will therefore be the response of periglacial environments to a warming climate.
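The kind of Poisson-error GLM used in the thesis can be sketched in a few lines of iteratively reweighted least squares. This toy fit is an illustration of the method only, not the thesis's code, and the simulated data and coefficient values are invented.

```python
import numpy as np

def poisson_glm(x, y, iters=25):
    """Fit y ~ Poisson(exp(b0 + b1*x)) by iteratively reweighted least squares."""
    X = np.column_stack([np.ones_like(x), x])   # design matrix with intercept
    beta = np.zeros(X.shape[1])
    for _ in range(iters):
        mu = np.exp(X @ beta)                   # mean under the log link
        z = X @ beta + (y - mu) / mu            # working response
        W = mu                                  # Poisson working weights
        beta = np.linalg.solve(X.T @ (W[:, None] * X), X.T @ (W * z))
    return beta
```

The "explained deviance" reported above is then 1 − D_model/D_null, comparing the fitted model's deviance with that of an intercept-only model.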

Clustered VLIW architectures solve the scalability problem associated with flat VLIW architectures by partitioning the register file and connecting only a subset of the functional units to a register file. However, inter-cluster communication in clustered architectures leads to increased leakage in functional components and a high number of register accesses. In this paper, we propose compiler scheduling algorithms targeting two previously ignored power-hungry components in clustered VLIW architectures, viz., the instruction decoder and the register file. We consider a split decoder design and propose a new energy-aware instruction scheduling algorithm that provides 14.5% and 17.3% benefit in decoder power consumption on average over a purely hardware-based scheme in the context of 2-clustered and 4-clustered VLIW machines. In the case of register files, we propose two new scheduling algorithms that exploit limited register snooping capability to reduce extra register file accesses. The proposed algorithms reduce register file power consumption on average by 6.85% and 11.90% (10.39% and 17.78%), respectively, along with a performance improvement of 4.81% and 5.34% (9.39% and 11.16%) over a traditional greedy algorithm for a 2-clustered (4-clustered) VLIW machine. (C) 2010 Elsevier B.V. All rights reserved.
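The paper's scheduling algorithms are not reproduced in the abstract; as a toy illustration of the underlying tension they manage, here is a hypothetical greedy pass (my own construction, not the authors' method) that assigns instructions to clusters so that operands tend to stay local, trading inter-cluster copies against load balance.

```python
def assign_clusters(instrs, n_clusters=2):
    """Greedy cluster assignment: place each instruction on the cluster that
    already holds most of its operands, subject to a load-balance cap.
    instrs: list of (name, [operand names]) in dependence order."""
    placement = {}                       # instruction name -> cluster id
    load = [0] * n_clusters
    cap = (len(instrs) + n_clusters - 1) // n_clusters
    transfers = 0                        # inter-cluster operand copies needed
    for name, operands in instrs:
        votes = [sum(1 for o in operands if placement.get(o) == c)
                 for c in range(n_clusters)]
        # prefer clusters holding more operands; break ties by lighter load
        for c in sorted(range(n_clusters), key=lambda c: (-votes[c], load[c])):
            if load[c] < cap:
                placement[name] = c
                load[c] += 1
                transfers += sum(1 for o in operands
                                 if placement.get(o, c) != c)
                break
    return placement, transfers
```

In a real clustered VLIW compiler this decision interacts with cycle-by-cycle scheduling and register pressure; the sketch only shows why cluster choice drives the number of inter-cluster register accesses.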

Several researchers have looked into various issues related to automatic parallelization of sequential programs for multicomputers, but there is a need for a coherent framework that encompasses all these issues. In this paper we present such a framework, which takes best advantage of the multicomputer architecture. We resort to the tiling transformation for iteration-space partitioning and propose a scheme of automatic data partitioning and dynamic data distribution. We have tried a simple implementation of our scheme on a transputer-based multicomputer [1], and the results are encouraging.
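As a sketch of the tiling transformation the framework relies on (illustrative only; the paper targets compiled code on transputers, not Python), a 2-D iteration space can be walked tile by tile, and each tile handed to a processor together with the data it touches:

```python
def tiles(n, m, ts):
    """Yield the tile origins covering an n-by-m iteration space."""
    for ti in range(0, n, ts):
        for tj in range(0, m, ts):
            yield ti, tj

def tiled_points(n, m, ts):
    """Enumerate every iteration, grouped tile by tile (the tiled loop nest).
    Boundary tiles are clipped so no iteration is visited twice."""
    for ti, tj in tiles(n, m, ts):
        for i in range(ti, min(ti + ts, n)):
            for j in range(tj, min(tj + ts, m)):
                yield i, j
```

Data partitioning then follows the owner-computes rule: the array blocks a tile touches are placed on the processor executing that tile (e.g. round-robin over tile indices), and dynamic data distribution moves blocks when the access pattern changes between loop nests.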

The reactions of p-nitrophenyl alkanoate esters with dialkylaminopyridine (DAAP) and its related mono- and di-anionic water-soluble derivatives have been studied separately in three different microemulsion (ME) media: (a) an oil-in-water ME (O/W), (b) a water-in-oil ME (W/O) and (c) a bicontinuous ME, in which oil and water are present in nearly comparable amounts. All the ME systems were stabilized by the cationic surfactant cetyltrimethylammonium bromide (CTABr), with butanol as a cosurfactant. The second-order rate constants (k2) in the microemulsion media were determined over a phase volume (φ) of approximately 0.13-0.46. To account for the contribution of the effective concentration of the nucleophiles in the aqueous pseudophase, corrected rate constants k2φ = k2(1 − φ) were obtained. The rate constants of the corresponding hydrolytic reactions were also examined in CTABr micelles. While the DAAP catalysts were partitioned between the micellar and aqueous pseudophases in the ME, the hydrophobic substrates were found to be mainly confined to the oil-rich phases. The present results indicate that the main effect of the ME media on the hydrolysis reaction is due to both electrostatic factors and substrate partitioning.

A parallel matrix multiplication algorithm is presented, and studies of its performance and performance estimation are discussed. The algorithm is implemented on a network of transputers connected in a ring topology. An efficient scheme for partitioning the input matrices is introduced which enables overlapping computation with communication. This makes the algorithm achieve near-ideal speed-up for reasonably large matrices. Analytical expressions for the execution time of the algorithm have been derived by analysing its computation and communication characteristics. These expressions are validated by comparing the theoretical results of the performance with the experimental values obtained on a four-transputer network for both square and irregular matrices. The analytical model is also used to estimate the performance of the algorithm for a varying number of transputers and varying problem sizes. Although the algorithm is implemented on transputers, the methodology and the partitioning scheme presented in this paper are quite general and can be implemented on other processors which have the capability of overlapping computation with communication. The equations for performance prediction can also be extended to other multiprocessor systems.
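A sequential simulation of the kind of ring partitioning described (a common scheme for ring-connected processors; the exact block layout here is my illustrative assumption, not necessarily the paper's): each processor keeps one row block of A, while the column blocks of B circulate around the ring, one shift per step.

```python
import numpy as np

def ring_matmul(A, B, p):
    """Simulate p ring-connected processors: processor r keeps row block r of A,
    while the p column blocks of B are shifted around the ring once per step."""
    n = A.shape[0]
    assert n % p == 0, "toy version: square matrices with n divisible by p"
    blk = n // p
    C = np.zeros((n, n))
    held = list(range(p))          # column block currently held by each processor
    for _ in range(p):
        for r in range(p):         # each processor multiplies its resident blocks
            cb = held[r]
            C[r*blk:(r+1)*blk, cb*blk:(cb+1)*blk] = (
                A[r*blk:(r+1)*blk, :] @ B[:, cb*blk:(cb+1)*blk])
        held = [held[(r + 1) % p] for r in range(p)]   # pass blocks to neighbour
    return C
```

On real hardware the point of the scheme is that the shift (communication) of the next column block proceeds concurrently with the current block product (computation), which the sequential simulation cannot show but the block schedule makes possible.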

The isothermal section of the phase diagram for the system NiO-MgO-SiO2 at 1373 K is established. The tie lines between the (NiXMg1-X)O solid solution with rock salt structure and the orthosilicate solid solution (NiYMg1-Y)Si0.5O2, and between the orthosilicate and metasilicate (NiZMg1-Z)SiO3 crystalline solutions, are determined using electron probe microanalysis (EPMA) and lattice parameter measurements on equilibrated samples. Although the monoxides and orthosilicates of Ni and Mg form a continuous range of solid solutions, the metasilicate phase exists only for 0 < Z < 0.096. The activity of NiO in the rock salt solid solution is determined as a function of composition and temperature in the range of 1023 to 1377 K using a solid-state galvanic cell. The Gibbs energy of mixing of the monoxide solid solution can be expressed by a pseudo-subregular solution model: ΔG^ex = X(1 − X)[(−2430 + 0.925T)X + (−5390 + 1.758T)(1 − X)] J/mol. The thermodynamic data for the rock salt phase are combined with information on interphase partitioning of Ni and Mg to generate the mixing properties for the orthosilicate and metasilicate solid solutions. The regular solution model describes the orthosilicate and metasilicate solid solutions at 1373 K within experimental uncertainties. The regular solution parameter ΔG^ex/Y(1 − Y) is −820 (±70) J/mol for the orthosilicate solid solution; the corresponding value for the metasilicate solid solution is −220 (±150) J/mol. The derived activities for the orthosilicate solid solution are discussed in relation to the intracrystalline ion-exchange equilibrium between the M1 and M2 sites.
The tie-line information, in conjunction with the activity data for the orthosilicate and metasilicate solid solutions, is used to calculate the Gibbs energy changes for the intercrystalline ion-exchange reactions. Combining these with the known data for NiSi0.5O2, the Gibbs energies of formation of MgSi0.5O2, MgSiO3 and metastable NiSiO3 are calculated. The Gibbs energy of formation of NiSiO3 from its component oxides is 7.67 (±0.6) kJ/mol at 1373 K.
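The pseudo-subregular model quoted in the abstract is directly computable. A small numeric sketch (the parameters come from the abstract; the activity-coefficient step uses the standard partial-molar relation RT ln γ_NiO = G^ex + (1 − X) dG^ex/dX, with the derivative taken numerically, which is my addition for illustration):

```python
R = 8.314  # gas constant, J/(mol K)

def gex(X, T):
    """Excess Gibbs energy of the (NiX Mg1-X)O rock salt solution, J/mol."""
    A = -2430 + 0.925 * T          # X-weighted subregular parameter
    B = -5390 + 1.758 * T          # (1 - X)-weighted subregular parameter
    return X * (1 - X) * (A * X + B * (1 - X))

def ln_gamma_nio(X, T, h=1e-6):
    """ln(activity coefficient of NiO) from the partial molar excess energy."""
    dg = (gex(X + h, T) - gex(X - h, T)) / (2 * h)   # numerical dGex/dX
    return (gex(X, T) + (1 - X) * dg) / (R * T)
```

At X = 0.5 and T = 1373 K this gives G^ex ≈ −517 J/mol and ln γ_NiO < 0, i.e. a mild negative deviation from ideality, consistent with the negative interaction parameters.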

Uracil excision repair is ubiquitous in all domains of life and is initiated by uracil DNA glycosylases (UDGs), which excise the promutagenic base, uracil, from DNA to leave behind an abasic site (AP-site). Repair of the resulting AP-sites requires an AP-endonuclease, a DNA polymerase, and a DNA ligase whose combined activities result in either short-patch or long-patch repair. Mycobacterium tuberculosis, the causative agent of tuberculosis, has an increased risk of accumulating uracils because of its G + C-rich genome and its niche inside host macrophages, where it is exposed to reactive nitrogen and oxygen species, two major causes of cytosine deamination (to uracil) in DNA. In vitro assays to study DNA repair in this important human pathogen are limited. To study uracil excision repair in mycobacteria, we have established assay conditions using cell-free extracts of M. tuberculosis and M. smegmatis (a fast-growing mycobacterium) and oligomer or plasmid DNA substrates. We show that in mycobacteria, uracil excision repair is completed primarily via long-patch repair. In addition, we show that M. tuberculosis UdgB, a newly characterized family 5 UDG, substitutes for the highly conserved family 1 UDG, Ung, thereby suggesting that UdgB might function as a backup enzyme for uracil excision repair in mycobacteria. (C) 2011 Elsevier Ltd. All rights reserved.

This paper looks at the complexity of four different incremental problems: (1) interval partitioning of a flow graph, (2) breadth-first search (BFS) of a directed graph, (3) lexicographic depth-first search (DFS) of a directed graph, and (4) constructing the postorder listing of the nodes of a binary tree. The last problem arises from the need to incrementally compute the Sethi-Ullman (SU) ordering [1] of the subtrees of a tree after it has undergone changes of a given type. These problems are among those that claimed our attention while we were designing algorithmic techniques for incremental code generation. BFS and DFS certainly have numerous other applications, but as far as our work is concerned, incremental code generation is the common thread linking these problems. The complexity of these problems is studied from two different perspectives. The theory of incremental relative lower bounds (IRLBs) is given in [2]; we use this theory to derive the IRLBs of the first three problems. We then use the notion of a bounded incremental algorithm [4] to prove the unboundedness of the fourth problem with respect to the locally persistent model of computation. Possibly the most interesting result is the lower bound for lexicographic DFS. In [5] the author considers lexicographic DFS to be a problem whose incremental version may require recomputation of the entire solution from scratch. Our IRLB result provides further evidence for this possibility, with the proviso that the incremental DFS algorithms considered do not require too much preprocessing.
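For concreteness, the (non-incremental) Sethi-Ullman computation behind problem (4) can be stated in a few lines — a textbook rendering, not the paper's incremental version: a leaf needs one register, an inner node needs the maximum of its children's needs if they differ and one more if they are equal, and the postorder listing evaluates the register-hungrier subtree first.

```python
def su_need(t):
    """Registers needed to evaluate an expression tree without spills.
    A tree is either a leaf (operand name) or a (left, right) pair."""
    if not isinstance(t, tuple):
        return 1
    l, r = su_need(t[0]), su_need(t[1])
    return max(l, r) if l != r else l + 1

def su_postorder(t, out=None):
    """Postorder listing that visits the register-hungrier subtree first."""
    if out is None:
        out = []
    if not isinstance(t, tuple):
        out.append(t)
        return out
    a, b = t
    if su_need(b) > su_need(a):       # evaluate the costlier subtree first
        a, b = b, a
    su_postorder(a, out)
    su_postorder(b, out)
    out.append("op")
    return out
```

The incremental question studied in the paper is how much of this postorder listing must be recomputed when the tree changes, rather than recomputing it from scratch as above.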

Parallel execution of computational mechanics codes requires efficient mesh-partitioning techniques. These techniques divide the mesh into a specified number of submeshes of approximately the same size while minimising the number of interface nodes between submeshes. This paper describes a new mesh-partitioning technique employing genetic algorithms. The proposed algorithm operates on the deduced graph (dual or nodal graph) of the given finite element mesh rather than directly on the mesh itself. It works by first constructing a coarse approximation of the graph using an automatic graph-coarsening method. The coarse graph is partitioned, and the results are interpolated onto the original graph to initialise an optimisation of the graph-partition problem. In practice, a hierarchy of (usually more than two) graphs is used to obtain the final partition. The proposed partitioning algorithm is applied to graphs derived from unstructured finite element meshes describing practical engineering problems, and also to several example graphs related to finite element meshes given in the literature. The test results indicate that the proposed GA-based graph-partitioning algorithm generates high-quality partitions and is superior to spectral and multilevel graph-partitioning algorithms.
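A stripped-down sketch of the GA layer alone — no coarsening hierarchy, and with invented operators rather than the paper's: chromosomes are balanced 0/1 node labellings of the graph, and fitness is the edge cut between the two sides.

```python
import numpy as np

def cut_size(edges, part):
    """Number of edges whose endpoints lie on different sides of the partition."""
    return sum(1 for u, v in edges if part[u] != part[v])

def ga_bipartition(n_nodes, edges, pop_size=30, gens=40, seed=0):
    """Evolve balanced bipartitions (bit vectors) minimising the edge cut."""
    rng = np.random.default_rng(seed)
    half = n_nodes // 2
    def random_part():
        p = np.zeros(n_nodes, dtype=int)
        p[rng.choice(n_nodes, half, replace=False)] = 1
        return p
    popn = [random_part() for _ in range(pop_size)]
    best = min(popn, key=lambda p: cut_size(edges, p)).copy()
    for _ in range(gens):
        fits = [cut_size(edges, p) for p in popn]
        order = np.argsort(fits)
        if fits[order[0]] < cut_size(edges, best):    # keep best ever seen
            best = popn[order[0]].copy()
        parents = [popn[i] for i in order[: pop_size // 2]]
        children = []
        while len(children) < pop_size:
            a, b = rng.integers(len(parents), size=2)
            child = np.where(rng.random(n_nodes) < 0.5,
                             parents[a], parents[b])   # uniform crossover
            if rng.random() < 0.3:                     # mutation: flip one bit
                child[rng.integers(n_nodes)] ^= 1
            while child.sum() > half:                  # repair to keep balance
                child[rng.choice(np.flatnonzero(child))] = 0
            while child.sum() < half:
                child[rng.choice(np.flatnonzero(child == 0))] = 1
            children.append(child)
        popn = children
    return best, cut_size(edges, best)
```

In the paper's multilevel setting this search would run on a coarse graph first, with the result interpolated to finer graphs as the starting population; extending the bit vector to k-way partitions is straightforward.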

Amorphous thin films of different Al–Fe compositions were produced by plasma/vapor quenching during pulsed laser deposition. The chosen compositions, Al72Fe28, Al40Fe60 and Al18Fe82, correspond to the Al5Fe2 and B2-ordered AlFe intermetallic compounds and the α–Fe solid solution, respectively. The films contained fine clusters that increased with iron content. The sequences of phase evolution observed in heating-stage transmission electron microscopy studies of the pulsed-laser-deposited films of Al72Fe28, Al40Fe60 and Al18Fe82 compositions showed evidence of composition partitioning during crystallization for all three compositions. This composition partitioning, in turn, resulted in the evolution of phases richer in Fe, as well as richer in Al, than the overall film composition in each case. The Fe-rich phases were the B2 phase in the Al72Fe28 film, the L12- and DO3-ordered phases in the Al40Fe60 film, and the hexagonal ε–Fe phase in the Al18Fe82 film. The Al-rich phases were Al13Fe4 for both the Al72Fe28 and Al40Fe60 films, and the DO3 and Al5Fe2 phases in the case of the Al18Fe82 film. We believe that this tendency toward composition partitioning during crystallization from the amorphous phase is a consequence of the tendency of the Fe atoms to cluster in the amorphous phase during nucleation. The body-centered cubic phase has a nucleation advantage over other metastable phases for all three compositions. The amorphization of the Al18Fe82 composition and the evolution of the L12 and ε–Fe phases in the Al–Fe system are new observations of this work.