913 results for Nearest Neighbour


Relevance: 10.00%

Abstract:

We have found the peculiar galaxy NGC 922 to be a new drop-through ring galaxy, using multiwavelength (ultraviolet to radio) imaging and spectroscopic observations. Its 'C'-shaped morphology and tidal plume indicate a recent strong interaction with its companion, which was identified in these observations. Using numerical simulations, we demonstrate that the main properties of the system can be generated by a high-speed, off-axis, drop-through collision of a small galaxy with a larger disc system, making NGC 922 one of the nearest known collisional ring galaxies. While these systems are rare in the local Universe, recent deep Hubble Space Telescope images suggest they were more common in the early Universe.

Relevance: 10.00%

Abstract:

We use series expansion methods to calculate the dispersion relation of the one-magnon excitations for the spin-1/2 triangular-lattice nearest-neighbor Heisenberg antiferromagnet above a three-sublattice ordered ground state. Several striking features are observed compared to the classical (large-S) spin-wave spectra. Whereas at low energies the dispersion is only weakly renormalized by quantum fluctuations, significant anomalies appear at high energies. In particular, we find roton-like minima at special wave vectors and strong downward renormalization in large parts of the Brillouin zone, leading to very flat or dispersionless modes. We present a detailed comparison of our calculated excitation energies in the Brillouin zone with the spin-wave dispersion to order 1/S calculated recently by Starykh, Chubukov, and Abanov [Phys. Rev. B 74, 180403(R) (2006)]. We find many common features but also some quantitative and qualitative differences. We show that at temperatures as low as 0.1J the thermally excited rotons make a significant contribution to the entropy. Consequently, unlike for the square-lattice model, a nonlinear sigma model description of the finite-temperature properties is only applicable at temperatures below 0.1J. Finally, we review recent NMR measurements on the organic compound κ-(BEDT-TTF)₂Cu₂(CN)₃. We argue that these are inconsistent with long-range order and with a description of the low-energy excitations in terms of interacting magnons, and that a Heisenberg model with only nearest-neighbor exchange therefore does not offer an adequate description of this material.
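
For orientation, the classical benchmark against which these renormalizations are measured, namely the linear spin-wave dispersion of the 120° ordered triangular-lattice antiferromagnet, is the standard large-S result (quoted here for reference; it is not reproduced in the abstract):

$$\omega_{\mathbf{k}} = 3JS\,\sqrt{\left(1-\gamma_{\mathbf{k}}\right)\left(1+2\gamma_{\mathbf{k}}\right)},\qquad \gamma_{\mathbf{k}}=\tfrac{1}{3}\Big[\cos k_x + 2\cos\tfrac{k_x}{2}\cos\tfrac{\sqrt{3}\,k_y}{2}\Big],$$

which vanishes at the zone centre and at the ordering wave vectors, the Goldstone modes that the quantum renormalizations discussed above leave intact at low energies.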

Relevance: 10.00%

Abstract:

This paper presents a scientific and technical description of the modelling framework and the main results of modelling the long-term average sediment delivery, from hillslopes to medium-scale catchments, over the entire Murray Darling Basin (MDB). A theoretical development that relates long-term averaged sediment delivery to the statistics of rainfall and catchment parameters is presented. The derived flood frequency approach was adapted to investigate the regionalization of the sediment delivery ratio (SDR) across the Basin. The SDR, a measure of catchment response to the upland erosion rate, was modelled by two lumped linear stores arranged in series: hillslope transport to the nearest streams and flow routing in the channel network. The theory shows that the ratio of catchment sediment residence time (SRT) to average effective rainfall duration is the most important control on the sediment delivery processes. In this study, catchment SRTs were estimated using the travel time for overland flow multiplied by an enlargement factor that is a function of particle size. Rainfall intensity and effective duration statistics were regionalized using long-term measurements from 195 pluviograph sites within and around the Basin. Finally, the model was implemented across the MDB using spatially distributed soil, vegetation, topographic and land use properties within a Geographic Information System (GIS) environment. The results predict strong variations in SDR, from close to 0 in floodplains to 70% in the eastern uplands of the Basin. (c) 2005 Elsevier Ltd. All rights reserved.
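
The abstract does not reproduce the derived expression, so the sketch below only illustrates the stated structure: two first-order linear stores in series, each attenuating delivery according to the ratio of its sediment residence time (SRT) to the average effective rainfall duration. The functional form 1/(1 + SRT/t_r) per store is an illustrative assumption, not the paper's formula.

```python
def sdr(srt_hillslope, srt_channel, rain_duration):
    """Illustrative sediment delivery ratio for two linear stores in series.

    Each store attenuates delivery by a factor controlled by the ratio of
    its sediment residence time (SRT) to the average effective rainfall
    duration; the first-order form below is an assumed placeholder for
    the paper's derived expression.
    """
    hillslope = 1.0 / (1.0 + srt_hillslope / rain_duration)
    channel = 1.0 / (1.0 + srt_channel / rain_duration)
    return hillslope * channel

# Long residence times relative to storm duration give a small SDR.
print(sdr(srt_hillslope=20.0, srt_channel=5.0, rain_duration=2.0))  # ~0.026
```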

Relevance: 10.00%

Abstract:

Finding the single-pair shortest path on a surface is a fundamental problem in various domains, such as 3D applications of Geographic Information Systems (GIS), robotic path planning, and surface nearest neighbor queries in spatial databases. Currently, to solve the problem, existing algorithms must traverse the entire polyhedral surface. With rapid advances in areas like the Global Positioning System (GPS), Computer Aided Design (CAD) systems and laser range scanners, surface models are becoming more and more complex; it is not uncommon for a surface model to contain millions of polygons, so the single-pair shortest path problem is getting harder and harder to solve. Based on the observation that the single-pair shortest path is spatially local, we propose in this paper efficient methods that exclude part of the surface model from the search process. Three novel expansion-based algorithms are proposed, namely the Naive Algorithm, the Rectangle-based Algorithm and the Ellipse-based Algorithm. Each algorithm uses a two-step approach to find the shortest path: (1) compute an initial local path; (2) use the length of this initial path to select a search region in which the global shortest path must lie. The search process terminates once the global optimality criteria are satisfied. By reducing the search region, performance is improved dramatically in most cases.
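
A minimal sketch of the ellipse-based pruning idea, under the assumption that the initial path length is used as a bound in the following way: since the straight-line distance lower-bounds any surface path, a point p can lie on an s-to-t path of length at most L only if euclid(s, p) + euclid(p, t) <= L. Function and variable names are illustrative, not the paper's.

```python
import math

def euclid(a, b):
    return math.dist(a, b)  # straight-line distance, Python 3.8+

def ellipse_filter(triangles, s, t, initial_len):
    """Keep only triangles that may contain the global shortest path.

    Any point p on an s-t path no longer than initial_len must satisfy
    euclid(s, p) + euclid(p, t) <= initial_len, because the straight-line
    distance lower-bounds any surface path. Testing only the vertices is
    a simplification; a robust version would pad initial_len by each
    triangle's diameter so interior points cannot be missed.
    """
    kept = []
    for tri in triangles:  # tri: a tuple of three (x, y, z) vertices
        if any(euclid(s, v) + euclid(v, t) <= initial_len for v in tri):
            kept.append(tri)
    return kept
```

The surface shortest-path search then runs only on the kept triangles, which is where the dramatic speedups reported above come from.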

Relevance: 10.00%

Abstract:

In this paper we present an efficient k-means clustering algorithm for two-dimensional data. The proposed algorithm re-organizes the dataset into a nested binary tree. Data items are compared at each node with only the two nearest means with respect to each dimension and assigned to the cluster whose mean is closer. The main intuition of our research is as follows: we build the nested binary tree, scan the data in raster order by in-order traversal of the tree, and compare the data item at each node with only the two nearest means to assign it to the intended cluster. In this way we save significant computational cost by reducing the number of comparisons with means and by minimizing the use of the Euclidean distance formula. Our results showed that our method can perform the clustering operation much faster than the classical ones. © Springer-Verlag Berlin Heidelberg 2005
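
A hedged sketch of the core comparison trick in one dimension: if the candidate means are kept sorted, each data item needs to be compared with only the two means bracketing it rather than with all k means. The tree construction and raster-order traversal of the paper are omitted here.

```python
import bisect

def assign_two_nearest(points, means):
    """Assign each 1-D point to the closer of its two bracketing means.

    means must be sorted. bisect finds the insertion position, so only
    the means immediately to the left and right are candidates, which
    replaces k distance computations per point with at most two.
    """
    labels = []
    for x in points:
        i = bisect.bisect_left(means, x)
        candidates = [j for j in (i - 1, i) if 0 <= j < len(means)]
        labels.append(min(candidates, key=lambda j: abs(x - means[j])))
    return labels

# Example: three means, each point lands on the nearest one.
print(assign_two_nearest([0.1, 4.2, 9.0], means=[0.0, 5.0, 10.0]))  # [0, 1, 2]
```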

Relevance: 10.00%

Abstract:

A k-NN query finds the k nearest neighbors of a given point in a point database. When it is sufficient to measure object distance using the Euclidean distance, the key to efficient k-NN query processing is to fetch and check the distances of a minimum number of points from the database. For many applications, such as vehicle movement along road networks or rover and animal movement along terrain surfaces, the distance is only meaningful when it is measured along a valid movement path. For this type of k-NN query, the focus of efficient query processing is to minimize the cost of computing distances using the environment data (such as the road network data and the terrain data), which can be several orders of magnitude larger than the point data. Efficient processing of k-NN queries based on the Euclidean distance or the road network distance has been investigated extensively in the past. In this paper, we investigate the problem of surface k-NN query processing, where the distance is calculated along the shortest path on a terrain surface. This problem is very challenging, as the terrain data can be very large and the computational cost of finding shortest paths is very high. We propose an efficient solution based on multiresolution terrain models. Our approach eliminates the costly process of finding shortest paths by ranking objects using estimated lower and upper bounds of distance on multiresolution terrain models.
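
The bound-based ranking can be sketched independently of the terrain models: a candidate whose lower bound exceeds the k-th smallest upper bound can never enter the result, so the expensive exact surface distance is computed only for the survivors. The bound callables below are placeholders for the multiresolution estimates.

```python
import heapq

def knn_with_bounds(candidates, k, lower, upper):
    """Prune k-NN candidates using cheap lower/upper distance bounds.

    lower(c) and upper(c) are placeholder callables returning bounds on
    the true surface distance of candidate c. Any candidate whose lower
    bound exceeds the k-th smallest upper bound cannot be in the k-NN
    result, so no exact distance is ever computed for it.
    """
    kth_ub = heapq.nsmallest(k, (upper(c) for c in candidates))[-1]
    return [c for c in candidates if lower(c) <= kth_ub]
```

The survivors would then be ranked by exact surface distance, refining the bounds on progressively finer terrain resolutions first.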

Relevance: 10.00%

Abstract:

We present a novel maximum-likelihood (ML) lattice-decoding algorithm for noncoherent block detection of QAM signals. The computational complexity is polynomial in the block length, making it feasible for implementation compared with the exhaustive-search ML detector. The algorithm works by enumerating the nearest-neighbor regions of a plane defined by the received vector, in a manner conceptually similar to sphere decoding. Simulations show that the new algorithm significantly outperforms existing approaches.
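
For context, the exhaustive-search ML detector used as the complexity benchmark can be written compactly. When the block is received through an unknown complex gain, the standard generalized-likelihood metric |y^H x|^2 / ||x||^2 is maximized over all candidate blocks x; the sketch below is that textbook baseline, not the paper's polynomial-time algorithm.

```python
import itertools
import numpy as np

def noncoherent_ml(y, constellation, block_len):
    """Exhaustive noncoherent ML detection over an unknown complex gain.

    Maximizes |y^H x|^2 / ||x||^2 over all length-block_len constellation
    vectors x, the standard GLRT metric when y = h*x + noise with the
    complex scalar h unknown. Complexity is |constellation|**block_len,
    which is why it is only usable for very short blocks.
    """
    best, best_metric = None, -np.inf
    for xs in itertools.product(constellation, repeat=block_len):
        x = np.array(xs)
        metric = abs(np.vdot(y, x)) ** 2 / np.vdot(x, x).real
        if metric > best_metric:
            best, best_metric = x, metric
    return best
```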

Relevance: 10.00%

Abstract:

This exegetical study analyzes the phenomena of prophecy and glossolalia in early Christianity on the basis of the First Letter to the Corinthians, reviewing some of the exegetical discussions of the text. The Christian movement emerged as a Jewish sect but matured in a Greco-Roman context and was deeply shaped by Western culture and traditions. On the one hand it was influenced by ancient Israelite traditions and Second Temple Judaism; on the other, to a lesser degree, by Greco-Roman traditions. This research shows that prophecy and glossolalia in 1 Corinthians are ecstatic phenomena whose nearest context is Jewish apocalyptic mysticism.

Relevance: 10.00%

Abstract:

The Protestant Presbyterian missionaries who came to Brazil at the beginning of the second half of the nineteenth century brought a Calvinist interpretation of the Bible, remaining faithful to a Princeton formation that synthesized Calvinist orthodoxy and pietism. These Princetonians took as their epistemological basis the philosophy of Thomas Reid, known as Common Sense Realism. This philosophy serves as a Reformed, or Calvinist, epistemology; in its Scottish formation and, consequently, in its American form via Princeton, it is understood as Providential Epistemology. Thus, when it was assimilated by Brazilians through preaching and theological formation, it became part of the Brazilian Presbyterian profile as a philosophical doctrine. Common Sense philosophy arose as a critique of the empiricist philosophy of David Hume which, for Reid, would converge toward a possible annihilation of religion and a pessimistic view of science, affecting empiricism and thereby producing a new formulation closer to skepticism. Reid therefore formulated a philosophy that, in his view, countered Locke and Berkeley and later David Hume, affirming that reality is independent of our apprehension. That is, in the perception of the external world there is no interference of the knowing subject upon the object of knowledge; our relation to objects is direct and should not be distorted by intermediations. In the implantation of Protestantism in Brazil via the Princeton missionaries, there was no intransigent defense of Calvinist principles by missionaries such as Fletcher and Simonton, but rather a continuation of reading the sacred scriptures through a Calvinist lens, as was done at Princeton Seminary. There was no strong emphasis on defending orthodoxy, because the theme of theological liberalism, or of the conflict between modernism and fundamentalism, was not pressing in the local conjuncture, where the practical concern with evangelization predominated. The concepts of Common Sense philosophy were close to the mitigated empiricism of Silvestre Pinheiro and the Eclecticism of Victor Cousin. Hence, in Brazil, the use of Common Sense philosophy is most visible in debates among intellectuals, at three interesting points: (1) Common Sense remained restricted to the academic space, in the formation of new pastors, with the works of Charles Hodge and A. A. Hodge as the main sources for implanting this mentality that ratified religious experience, thereby shaping the face of Protestantism among Presbyterians, one of the main Protestant denominations of the late nineteenth century; (2) in debates between Catholic and Protestant clergy in theological polemics; (3) in the utilitarian appropriation of foreign cultural assimilation by national Protestants, facilitated, not least, by the sympathy of Brazilian liberals for Protestantism, even as those liberals maintained a philosophical line closer to mitigated empiricism and eclecticism. Our hypothesis thus seeks to demonstrate that the Protestants brought with them epistemological formulations that were passed on to a group of intellectuals who formed the first generation of Presbyterian pastors in the history of this denomination. They were converted and assimilated the new doctrines not merely through preaching but through their philosophical way of regarding the objects studied, information that came through the epistemological basis of Common Sense Realism, which found space in the Brazilian republican ideals of the nineteenth century.

Relevance: 10.00%

Abstract:

The whole set of nickel(II) complexes with non-derivatized edta-type hexadentate ligands has been investigated with respect to their structural and electronic properties. Two further complexes were prepared in order to complete the set: trans(O5)-[Ni(ED3AP)]2- and trans(O5O6)-[Ni(EDA3P)]2-. The trans(O5) geometry was verified crystallographically, and the trans(O5O6) geometry of the second complex was predicted by DFT calculations and spectral analysis. A mutual dependence has been established between the number of five-membered carboxylate rings, the octahedral/tetrahedral deviation of the metal-ligand/nitrogen-neighbour-atom angles, the charge-transfer energies (CTE) calculated by Morokuma's energy decomposition analysis, the energies of the absorption bands, and the HOMO–LUMO gap.

Relevance: 10.00%

Abstract:

This special issue of the Journal of the Operational Research Society is dedicated to papers on the related subjects of knowledge management and intellectual capital. These subjects continue to generate considerable interest amongst both practitioners and academics. This issue demonstrates that operational researchers have many contributions to offer to the area, especially by bringing multi-disciplinary, integrated and holistic perspectives. The papers included are both theoretical and practical, and include a number of case studies showing how knowledge management has been implemented in practice, which may assist other organisations in their search for a better means of managing what is now recognised as a core organisational activity. A growing number of organisations accept that the precise handling of information and knowledge is a significant factor in their success, but also that implementing a strategy and processes for this handling is a challenge. It is here, in the particular area of knowledge process handling, that the contributions of operational researchers can be seen most clearly, as the papers in this edition illustrate. The issue comprises nine papers, contributed by authors based in eight different countries on five continents. Lind and Seigerroth describe an approach that they call team-based reconstruction, intended to help articulate knowledge in a particular organisational context. They illustrate the use of this approach with three case studies, two in manufacturing and one in public sector health care. Different ways of carrying out reconstruction are analysed, and the benefits of team-based reconstruction are established. Edwards and Kidd, and Connell, Powell and Klein, both concentrate on knowledge transfer. Edwards and Kidd discuss the issues involved in transferring knowledge across frontières (borders) of various kinds, from those within organisations to those between countries. They present two examples, one in distribution and the other in manufacturing. They conclude that trust and culture both play an important part in facilitating such transfers, that IT should be kept in a supporting role in knowledge management projects, and that a staged approach to this IT support may be the most effective. Connell, Powell and Klein consider the oft-quoted distinction between explicit and tacit knowledge, and argue that such a distinction is sometimes unhelpful. They suggest that knowledge should rather be regarded as a holistic systemic property. The consequences of this for knowledge transfer are examined, with a particular emphasis on what this might mean for the practice of OR. Their view of OR in the context of knowledge management very much echoes Lind and Seigerroth's focus on knowledge for human action. This is an interesting convergence of views given that, broadly speaking, one set of authors comes from within the OR community and the other from outside it. Hafeez and Abdelmeguid present the closest thing to a 'hard' OR contribution among the papers in this special issue. In their paper they construct and use system dynamics models to investigate alternative ways in which an organisation might close a knowledge gap or skills gap. The methods they use have the potential to be generalised to any other quantifiable aspects of intellectual capital. The contribution by Revilla, Sarkis and Modrego is also at the 'hard' end of the spectrum.
They evaluate the performance of public–private research collaborations in Spain, using an approach based on data envelopment analysis. They found that larger organisations tended to perform relatively better than smaller ones, even though the approach used takes scale effects into account. Perhaps more interesting is that many factors that might have been thought relevant, such as the organisation's existing knowledge base or how widely applicable the results of the project would be, had no significant effect on performance. It may be that how well the partnership between the collaborators works (not a factor it was possible to take into account in this study) is more important than most other factors. Mak and Ramaprasad introduce the concept of a knowledge supply network. This builds on existing ideas of supply chain management, but also integrates the design chain and the marketing chain, to address all the intellectual property connected with the network as a whole. The authors regard the knowledge supply network as the natural focus for considering knowledge management issues. They propose seven criteria for evaluating knowledge supply network architecture, and illustrate their argument with an example from the electronics industry: integrated circuit design and fabrication. Hasan and Crawford take a holistic approach to knowledge management. They demonstrate their argument, that there is no simple IT solution for organisational knowledge management efforts, through two case study investigations. These case studies, in Australian universities, are investigated through cultural-historical activity theory, which focuses the study on the activities carried out by people in support of their interpretations of their role, the opportunities available and the organisation's purpose. Human activities, it is argued, are mediated by the available tools, including IT and IS and, in this particular context, KMS. It is this argument that places the available technology within the knowledge activity process and permits the future design of KMS to be improved through the lessons learnt by studying these knowledge activity systems in practice. Wijnhoven concentrates on knowledge management at the operational level of the organisation. He is concerned with studying the transformation of certain inputs to outputs (the operations function) and the consequent realisation of organisational goals via the management of these operations. He argues that the inputs and outputs of this process in the context of knowledge management are different types of knowledge, and he names this operational method knowledge logistics. The method of transformation he calls learning. This theoretical paper discusses the operational management of four types of knowledge objects (explicit understanding, information, skills, and norms and values) and shows how, through the proposed framework, learning can transfer these objects to clients in a logistical process without a major transformation in content. Millie Kwan continues this theme with a paper about process-oriented knowledge management. In her case study she discusses an implementation of knowledge management where the knowledge is centred around an organisational process, and the mission, rationale and objectives of the process define the scope of the project. In her case the concern is the effective use of real estate (property and buildings) within a Fortune 100 company.
In order to manage the knowledge about this property and the process by which the best 'deal' for internal customers and the overall company was reached, a KMS was devised. She argues that process knowledge is a source of core competence and thus needs to be strategically managed. Finally, you may also wish to read a related paper originally submitted for this Special Issue, 'Customer knowledge management' by Garcia-Murillo and Annabi, which was published in the August 2002 issue of the Journal of the Operational Research Society, 53(8), 875–884.

Relevance: 10.00%

Abstract:

The distribution of finished products from depots to customers is a practical and challenging problem in logistics management. Better routing and scheduling decisions can result in a higher level of customer satisfaction because more customers can be served in a shorter time. The distribution problem is generally formulated as the vehicle routing problem (VRP). Nevertheless, the VRP rests on the rigid assumption that there is only one depot, so in cases where a logistics company has more than one depot it is not suitable. To resolve this limitation, this paper focuses on the VRP with multiple depots, or multi-depot VRP (MDVRP). The MDVRP is NP-hard, which means that no efficient algorithm for solving the problem to optimality is known. To deal with the problem efficiently, two hybrid genetic algorithms (HGAs) are developed in this paper. The major difference between the HGAs is the initialization procedure: the initial solutions are generated randomly in HGA1, whereas the Clarke and Wright saving method and the nearest neighbor heuristic are incorporated into HGA2. A computational study is carried out to compare the algorithms on different problem sizes. The results show that the performance of HGA2 is superior to that of HGA1 in terms of total delivery time.
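
As an illustration of one of the HGA2 initializers, a nearest neighbor route construction for a single depot can be sketched as follows; capacity constraints and the multi-depot customer assignment are deliberately omitted.

```python
import math

def nearest_neighbor_route(depot, customers):
    """Build one route greedily: always visit the closest unvisited customer.

    depot and customers are (x, y) tuples. Capacity and time-window
    handling of a real (MD)VRP initializer are omitted for brevity.
    """
    route, current = [], depot
    remaining = list(customers)
    while remaining:
        nxt = min(remaining, key=lambda c: math.dist(current, c))
        remaining.remove(nxt)
        route.append(nxt)
        current = nxt
    return route  # the vehicle returns to the depot afterwards
```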

Relevance: 10.00%

Abstract:

This paper presents two hybrid genetic algorithms (HGAs) to optimize the component placement operation for collect-and-place machines in printed circuit board (PCB) assembly. The component placement problem is to optimize simultaneously (i) the assignment of components to a movable revolver head or assembly tour, (ii) the sequence of component placements on a stationary PCB in each tour, and (iii) the arrangement of component types on stationary feeders. The objective is to minimize the total traveling time spent by the revolver head in assembling all components on the PCB. The major difference between the HGAs is the initialization procedure: the initial solutions are generated randomly in HGA1, whereas the Clarke and Wright saving method, the nearest neighbor heuristic, and the neighborhood frequency heuristic are incorporated into HGA2. A computational study is carried out to compare the algorithms with different population sizes. The results show that the performance of HGA2 is superior to that of HGA1 in terms of total assembly time.
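
The Clarke and Wright saving method used for initialization ranks pairs of stops by the saving s(i, j) = d(0, i) + d(0, j) - d(i, j) gained by serving i and j on one tour instead of two separate tours from the depot. A minimal sketch of the saving computation, with the subsequent route merging left out:

```python
import math

def clarke_wright_savings(depot, customers):
    """Compute Clarke-Wright savings s(i, j) = d(0,i) + d(0,j) - d(i,j).

    Returns (saving, i, j) triples sorted by decreasing saving; a full
    implementation would then merge routes in this order, subject to
    feasibility checks.
    """
    d = math.dist
    savings = []
    for i in range(len(customers)):
        for j in range(i + 1, len(customers)):
            s = d(depot, customers[i]) + d(depot, customers[j]) \
                - d(customers[i], customers[j])
            savings.append((s, i, j))
    return sorted(savings, reverse=True)
```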

Relevance: 10.00%

Abstract:

A chip shooter machine for electronic component assembly has a movable feeder carrier, a movable X–Y table carrying a printed circuit board (PCB), and a rotary turret with multiple assembly heads. This paper presents a hybrid genetic algorithm (HGA) to solve the component scheduling problem, that is, to optimize simultaneously the sequence of component placements and the arrangement of component types on the feeders for a chip shooter machine. The objective is to minimize the total assembly time. The GA developed in the paper hybridizes different search heuristics, including the nearest-neighbor heuristic, the 2-opt heuristic, and an iterated swap procedure, which is a new improvement heuristic. Compared with the results obtained by other researchers, the performance of the HGA is superior in terms of assembly time.

Scope and purpose: When assembling surface mount components on a PCB, it is necessary to determine the optimal sequence of component placements and the best arrangement of component types on the feeders simultaneously in order to minimize the total assembly time. Since it is very difficult to obtain the optimal solution, a GA hybridized with several search heuristics is developed. The type of machine studied is the chip shooter machine. This paper compares the algorithm with a simple GA and shows that its performance is superior in terms of total assembly time.
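
The 2-opt heuristic hybridized into the GA is the classic improvement move: reverse a segment of the tour whenever doing so shortens it. A generic sketch over points in the plane (the machine-specific cost model is abstracted into the distance function):

```python
import math

def two_opt(tour, dist=math.dist):
    """Improve a tour in place by reversing segments while any reversal helps."""
    improved = True
    while improved:
        improved = False
        for i in range(1, len(tour) - 1):
            for j in range(i + 1, len(tour)):
                # Gain from replacing edges (i-1, i) and (j, j+1) with
                # (i-1, j) and (i, j+1); the tour is treated as a cycle.
                a, b = tour[i - 1], tour[i]
                c, d = tour[j], tour[(j + 1) % len(tour)]
                if dist(a, c) + dist(b, d) < dist(a, b) + dist(c, d) - 1e-12:
                    tour[i:j + 1] = reversed(tour[i:j + 1])
                    improved = True
    return tour
```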

Relevance: 10.00%

Abstract:

A chip shooter machine for electronic component assembly has a movable feeder carrier holding components, a movable X–Y table carrying a printed circuit board (PCB), and a rotary turret with multiple assembly heads. This paper presents a hybrid genetic algorithm to optimize the sequence of component placements for a chip shooter machine. The objective is to minimize the total traveling distance of the X–Y table, or the board. The genetic algorithm developed in the paper hybridizes the nearest neighbor heuristic and an iterated swap procedure, which is a new improvement heuristic. We have compared the performance of the hybrid genetic algorithm with that of the approach proposed by other researchers and have demonstrated that our algorithm is superior in terms of the distance traveled by the X–Y table, or the board.
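
The abstract does not define the iterated swap procedure, so the following is only one plausible reading, labeled as an assumption: repeatedly swap a random pair of placements and keep the swap only if the caller-supplied tour length decreases.

```python
import random

def iterated_swap(sequence, tour_length, iterations=1000, seed=0):
    """Illustrative iterated swap: keep a random pairwise swap only if it
    shortens the placement sequence.

    tour_length is a caller-supplied cost function; the actual procedure
    in the paper may differ in its details.
    """
    rng = random.Random(seed)
    best = tour_length(sequence)
    for _ in range(iterations):
        i, j = rng.sample(range(len(sequence)), 2)
        sequence[i], sequence[j] = sequence[j], sequence[i]
        cost = tour_length(sequence)
        if cost < best:
            best = cost
        else:  # revert the non-improving swap
            sequence[i], sequence[j] = sequence[j], sequence[i]
    return sequence, best
```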