8 results for non-trivial data structures
in Greenwich Academic Literature Archive - UK
Abstract:
Network analysis is distinguished from traditional social science by the dyadic nature of the standard data set. Whereas in traditional social science we study monadic attributes of individuals, in network analysis we study dyadic attributes of pairs of individuals. These dyadic attributes (e.g. social relations) may be represented in matrix form by a square 1-mode matrix. In contrast, the data in traditional social science are represented as 2-mode matrices. However, network analysis is not completely divorced from traditional social science, and often has occasion to collect and analyze 2-mode matrices. Furthermore, some of the methods developed in network analysis have uses in analyzing non-network data. This paper presents and discusses ways of applying and interpreting traditional network analytic techniques to 2-mode data, and develops new techniques as well. Three areas are covered in detail: displaying 2-mode data as networks, detecting clusters, and measuring centrality.
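The projection from a 2-mode matrix to a square 1-mode matrix can be sketched as follows. This is a minimal illustration with a made-up affiliation matrix, not data from the paper:

```python
import numpy as np

# Hypothetical 2-mode (actor-by-event) affiliation matrix: rows are
# 4 actors, columns are 3 events; A[i, j] = 1 means actor i attended
# event j.
A = np.array([
    [1, 1, 0],
    [1, 0, 1],
    [0, 1, 1],
    [0, 0, 1],
])

# Projecting onto the actor mode yields a square 1-mode matrix whose
# off-diagonal entries count shared events -- dyadic, network-style data.
actor_co_membership = A @ A.T

# The dual projection yields event-by-event overlap counts.
event_overlap = A.T @ A
```

Either projection can then be fed to standard network-analytic tools, which is one of the interpretive moves the abstract discusses.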
Abstract:
Traffic policing and bandwidth management strategies at the User Network Interface (UNI) of an ATM network are investigated by simulation. The network is assumed to transport real-time (RT) traffic, such as voice and video, as well as non-real-time (non-RT) data traffic. The proposed policing function, called the super leaky bucket (S-LB), is based on the leaky bucket (LB), but handles the three types of traffic differently according to their quality of service (QoS) requirements. Separate queues are maintained for RT and non-RT traffic. They are normally served alternately, but if the number of RT cells exceeds a threshold, the RT queue gets non-pre-emptive priority. If the RT queue grows further, low-priority cells are discarded. Non-RT cells are buffered and the sources are throttled back during periods of congestion. The simulations clearly demonstrate the advantages of the proposed strategy in providing improved levels of service (delay, jitter and loss) for all types of traffic.
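The paper's exact S-LB algorithm is not reproduced in the abstract; the sketch below only illustrates the alternating-service-with-threshold discipline it describes. The threshold value and function names are hypothetical:

```python
from collections import deque

# Hypothetical threshold -- the abstract gives no numeric values.
RT_PRIORITY_THRESHOLD = 5    # RT backlog above which RT gets priority

def serve_next(rt_queue, nrt_queue, last_served):
    """Choose the next cell to transmit: the RT and non-RT queues are
    normally served alternately, but the RT queue gets non-pre-emptive
    priority once its backlog exceeds a threshold."""
    if len(rt_queue) > RT_PRIORITY_THRESHOLD:
        return rt_queue.popleft(), "rt"
    if last_served == "rt" and nrt_queue:
        return nrt_queue.popleft(), "nrt"
    if rt_queue:
        return rt_queue.popleft(), "rt"
    if nrt_queue:
        return nrt_queue.popleft(), "nrt"
    return None, last_served   # both queues empty
```

The further behaviours the abstract mentions (discarding low-priority RT cells at a second threshold, throttling non-RT sources under congestion) would sit on top of this basic discipline.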
Abstract:
The most common parallelisation strategy for many Computational Mechanics (CM) codes (typified by Computational Fluid Dynamics (CFD) applications) which use structured meshes involves a 1D partition based upon slabs of cells. However, many CFD codes employ pipeline operations in their solution procedure. For parallelised versions of such codes to scale well they must employ two- (or more) dimensional partitions. This paper describes an algorithmic approach to multi-dimensional mesh partitioning in code parallelisation, its implementation in a toolkit for almost automatically transforming scalar codes to parallel form, and its testing on a range of ‘real-world’ FORTRAN codes. The concept of multi-dimensional partitioning is straightforward, but non-trivial to represent as a sufficiently generic algorithm that can be embedded in a code transformation tool. The results of the tests on these real-world codes demonstrate clear improvements in parallel performance and scalability over a 1D partition. This is matched by a huge reduction in the time required to develop the parallel versions compared with hand coding – from weeks/months down to hours/days.
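The basic idea of a 2D block partition of a structured mesh can be sketched as follows; this is a generic illustration, not the toolkit's algorithm, and all names are illustrative:

```python
def partition_ranges(n, p):
    """Split n mesh cells into p near-equal contiguous (start, end) ranges."""
    base, extra = divmod(n, p)
    ranges, start = [], 0
    for i in range(p):
        size = base + (1 if i < extra else 0)
        ranges.append((start, start + size))
        start += size
    return ranges

def partition_2d(nx, ny, px, py):
    """2D block partition of an nx-by-ny structured mesh onto a px-by-py
    processor grid; each entry is ((i0, i1), (j0, j1)) index ranges for
    one processor's block."""
    xr, yr = partition_ranges(nx, px), partition_ranges(ny, py)
    return [(ix, iy) for ix in xr for iy in yr]
```

With a 1D (slab) partition, a pipeline sweep along the partitioned axis serialises the processors; a 2D partition keeps more of them busy at each pipeline stage, which is the scalability argument the abstract makes.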
Abstract:
In this paper, we critically examine a special class of graph matching algorithms that follow the approach of node-similarity measurement. A high-level algorithmic framework, the node-similarity graph matching framework (NSGM framework), is proposed, which subsumes many existing graph matching algorithms, including the eigen-decomposition method of Umeyama, the polynomial-transformation method of Almohamad, the hubs-and-authorities method of Kleinberg, and the Kronecker product successive projection methods of Wyk, among others. In addition, improved algorithms can be developed from the NSGM framework with respect to the corresponding results in graph theory. We observe that, in general, any algorithm subsumed by the NSGM framework fails to work well for graphs with non-trivial auto-isomorphism structure.
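One concrete instance of the node-similarity idea (not the paper's NSGM framework itself) is a similarity-propagation iteration in the spirit of the hubs-and-authorities approach, sketched below under stated assumptions; the damping term `+ S` is added here to avoid the even/odd oscillation of the pure iteration:

```python
import numpy as np

def node_similarity(A, B, iters=50):
    """Similarity-propagation scores between graphs with adjacency
    matrices A (n x n) and B (m x m): nodes become similar when their
    neighbours are similar.  S is renormalized each step; the '+ S'
    term damps even/odd oscillation.  A sketch, not the paper's method."""
    n, m = A.shape[0], B.shape[0]
    S = np.ones((n, m))
    for _ in range(iters):
        S = A @ S @ B.T + A.T @ S @ B + S
        S /= np.linalg.norm(S)   # Frobenius normalization
    return S
```

Matching the 3-node path graph against itself, the two end nodes receive identical score rows, so no node-similarity score can tell them apart. This illustrates the abstract's closing observation about graphs with non-trivial auto-isomorphism structure.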
Abstract:
This paper presents an investigation into applying Case-Based Reasoning (CBR) to multiple heterogeneous case bases using agents. The adaptive CBR process and the architecture of the system are presented, and a case study is used to illustrate and evaluate the approach. The process of creating and maintaining the dynamic data structures is discussed. The similarity metrics employed by the system support the optimisation of collaboration between the agents, which is based on a blackboard architecture. The blackboard architecture is shown to support efficient collaboration between the agents towards an efficient overall CBR solution, while case-based reasoning methods allow the overall system to adapt and “learn” new collaborative strategies for achieving the aims of the overall CBR problem-solving process.
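The abstract does not specify its similarity metrics; the sketch below shows only the generic weighted nearest-neighbour retrieval step that CBR systems of this kind typically use against a single case base. Attribute names and weights are hypothetical:

```python
def similarity(case_a, case_b, weights):
    """Weighted global similarity over named attributes: local similarity
    is 1 for an exact symbolic match, and falls off with relative
    difference for numeric attributes.  Weights are illustrative."""
    total = 0.0
    for attr, w in weights.items():
        a, b = case_a[attr], case_b[attr]
        if isinstance(a, (int, float)):
            local = 1.0 - min(abs(a - b) / (abs(a) + abs(b) + 1e-9), 1.0)
        else:
            local = 1.0 if a == b else 0.0
        total += w * local
    return total / sum(weights.values())

def retrieve(query, case_base, weights):
    """Return the most similar stored case to the query."""
    return max(case_base, key=lambda c: similarity(query, c, weights))
```

In a multi-agent setting like the one described, each agent would run retrieval of this kind against its own case base and post candidate cases to the shared blackboard for comparison.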
Abstract:
The X-ray crystal structures of (I), the base 4030W92, 5-(2,3-dichlorophenyl)-2,4-diamino-6-fluoromethyl-pyrimidine, C11H9Cl2FN4, and (II) 227C89, the methanesulphonic acid salt of 5-(2,6-dichlorophenyl)-1-H-2,4-diamino-6-methyl-pyrimidine, C11H11Cl2N4 center dot CH3O3S, have been carried out at low temperature. A detailed comparison of the two structures is given. Structure (I) is non-centrosymmetric, crystallizing in space group P2(1) with unit cell a = 10.821(3), b = 8.290(3), c = 13.819(4) angstrom, beta = 105.980(6)degrees, V = 1191.8(6) angstrom(3), Z = 4 (two molecules per asymmetric unit) and density (calculated) = 1.600 Mg/m(3). Structure (II) crystallizes in the triclinic space group P(1) over bar with unit cell a = 7.686(2), b = 8.233(2), c = 12.234(2) angstrom, alpha = 78.379(4), beta = 87.195(4), gamma = 86.811(4)degrees, V = 756.6(2) angstrom(3), Z = 2, density (calculated) = 1.603 Mg/m(3). Final R indices [I > 2sigma(I)] are R1 = 0.0572, wR2 = 0.1003 for (I) and R1 = 0.0558, wR2 = 0.0982 for (II). R indices (all data) are R1 = 0.0983, wR2 = 0.1116 for (I) and R1 = 0.1009, wR2 = 0.1117 for (II). 5-Phenyl-2,4-diaminopyrimidine and 6-phenyl-1,2,4-triazine derivatives, which include lamotrigine (3,5-diamino-6-(2,3-dichlorophenyl)-1,2,4-triazine), have been investigated for some time for their effects on the central nervous system. The three-dimensional structures reported here form part of a newly developed database for the detailed investigation of members of this structural series and their biological activities.
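The reported cell volumes can be checked against the published cell parameters with the standard triclinic volume formula (which reduces to abc·sin β for the monoclinic case); the code below is a worked check, not part of the paper:

```python
from math import cos, radians, sqrt

def triclinic_volume(a, b, c, alpha, beta, gamma):
    """Unit-cell volume in Å^3 from the general triclinic formula
    V = abc * sqrt(1 - cos^2(a) - cos^2(b) - cos^2(g)
                   + 2*cos(a)*cos(b)*cos(g)),
    with edges in Å and angles in degrees."""
    ca, cb, cg = (cos(radians(x)) for x in (alpha, beta, gamma))
    return a * b * c * sqrt(1 - ca**2 - cb**2 - cg**2 + 2 * ca * cb * cg)

# Structure (II), triclinic: reported V = 756.6(2) Å^3
v2 = triclinic_volume(7.686, 8.233, 12.234, 78.379, 87.195, 86.811)

# Structure (I), monoclinic (alpha = gamma = 90°): reported V = 1191.8(6) Å^3
v1 = triclinic_volume(10.821, 8.290, 13.819, 90.0, 105.980, 90.0)
```

Both computed volumes reproduce the reported values to within the quoted uncertainties.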
Abstract:
The corrosion of steel reinforcement bars in reinforced concrete structures exposed to severe marine environments usually is attributed to the aggressive nature of chloride ions. In some cases in practice, corrosion has been observed to commence within only a few years of exposure, even with considerable concrete cover to the reinforcement and apparently high-quality concretes. However, there are a number of other cases in practice for which corrosion initiation took much longer, even with quite modest concrete cover and modest concrete quality. Many of these structures show satisfactory long-term structural performance, despite having high levels of localized chloride concentrations at the reinforcement. This disparity was noted more than 50 years ago, but still appears not to be fully explained. This paper presents a systematic overview of cases reported in the engineering and corrosion literature and considers possible reasons for these differences. Consistent with observations by others, the data show that concretes made from blast furnace cements have better corrosion durability properties. The data also strongly suggest that concretes made with limestone or non-reactive dolomite aggregates, or with sufficiently high levels of other forms of calcium carbonates, have favourable reinforcement corrosion properties. Both corrosion initiation and the onset of significant damage are delayed. Some possible reasons for this are explored briefly.
Abstract:
This paper presents an approach for detecting local damage in large-scale frame structures by utilizing regularization methods for ill-posed problems. A direct relationship between the change in stiffness caused by local damage and the measured modal data for the damaged structure is developed, based on the perturbation method for structural dynamic systems. Thus, the measured incomplete modal data can be adopted directly in damage identification without requiring model reduction techniques, and common regularization methods can be employed effectively to solve the developed equations. Damage indicators are chosen to reflect both the location and severity of local damage in individual components of frame structures, such as in brace members and at beam-column joints. The Truncated Singular Value Decomposition solution, incorporating the Generalized Cross Validation method, is introduced to evaluate the damage indicators for cases in which realistic errors exist in the modal data measurements. Results for a 16-story building model structure show that structural damage can be correctly identified at a detailed level using only limited, noisy measured modal data for the damaged structure.
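The paper's damage-indicator equations are not reproduced in the abstract; the sketch below only illustrates the generic TSVD-with-GCV machinery it names, applied to an ill-posed linear system A x = b (in the paper, A would come from the stiffness-perturbation relationship and x would hold the damage indicators). Function names and the GCV form are standard but assumed here:

```python
import numpy as np

def tsvd_solve(A, b, k):
    """Truncated SVD solution of A x = b, keeping the k largest
    singular values and discarding the noise-dominated small ones."""
    U, s, Vt = np.linalg.svd(A, full_matrices=False)
    coeffs = (U.T @ b)[:k] / s[:k]
    return Vt[:k].T @ coeffs

def gcv_score(A, b, k):
    """Generalized Cross Validation score for truncation level k:
    GCV(k) = ||A x_k - b||^2 / (m - k)^2."""
    m = A.shape[0]
    x = tsvd_solve(A, b, k)
    return np.sum((A @ x - b) ** 2) / (m - k) ** 2

def tsvd_gcv(A, b):
    """Choose the truncation level minimizing the GCV score, then
    return the corresponding TSVD solution and the chosen level."""
    ks = range(1, min(A.shape))
    k_best = min(ks, key=lambda k: gcv_score(A, b, k))
    return tsvd_solve(A, b, k_best), k_best
```

The appeal of GCV here matches the abstract's setting: it picks the truncation level from the data alone, without requiring prior knowledge of the measurement noise level.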