Abstract:
Synthesis of complex metal oxides by the thermal decomposition of solid-solution precursors (formed by isomorphous compounds of the component metals) has been investigated, since the method enables mixing of cations on an atomic scale and drastically reduces diffusion distances to a few angstroms. Several interesting oxides such as Ca2FeO3.5, CaCo2O4, Ca2Co2O5, and Ca2FeCoO5 have been prepared by this technique starting from carbonate solid solutions of the type Ca1-xFexCO3, Ca1-xCoxCO3, and Ca1-x-yMxM'yCO3 (M, M' = Mn, Fe, Co). The method has been extended to oxalate solid-solution precursors, and the possibility of making use of other kinds of precursor solid solutions is indicated.
Abstract:
We incorporate various gold nanoparticles (AuNPs) capped with different ligands into two-dimensional films and three-dimensional aggregates derived from N-stearoyl-L-alanine and N-lauroyl-L-alanine, respectively. The assemblies of N-stearoyl-L-alanine afforded stable films at the air-water interface, and more compact assemblies were formed upon incorporation of AuNPs at that interface. We then examined the effects of incorporating various AuNPs functionalized with different capping ligands into three-dimensional assemblies of N-lauroyl-L-alanine, a compound that forms a gel in hydrocarbons. The profound influence of nanoparticle incorporation on the physical gels was evident from evaluation of various microscopic and bulk properties. The interaction of AuNPs with the gelator assembly was found to depend critically on the capping ligands protecting the Au surface. Transmission electron microscopy (TEM) showed a long-range directional assembly of certain AuNPs along the gel fibers. Scanning electron microscopy (SEM) images of the freeze-dried gels and nanocomposites indicate that the morphological transformation in the composite microstructures depends significantly on the capping agent of the nanoparticles. Differential scanning calorimetry (DSC) showed that gel formation from the sol occurred at a lower temperature upon incorporation of AuNPs whose capping ligands were able to align with and noncovalently interact with the gel fibers. Rheological studies indicate that the gel-nanoparticle composites exhibit significantly greater viscoelasticity than the native gel alone when the capping ligands can interdigitate into the gelator assembly. Thus, it was possible to establish a clear relationship between the bulk material properties and the molecular-level properties by manipulating the information inscribed on the NP surface.
Abstract:
Public rental housing (PRH) projects are the mainstream of China's new affordable housing policies, and their integrated sustainability has a far-reaching effect on the well-being of medium-low-income families and on social stability. However, there has been little quantitative research on the integrated sustainability of PRH projects. Our study aims to fill this gap by proposing an assessment model of the integrated sustainability of PRH projects. First, this paper defines the sustainability of a PRH project. Second, after constructing the sustainability system of a PRH project from the perspective of a complex eco-system, the paper explores the internal operation mechanism and the coupling mechanism among the ecological, economic, and social subsystems. Third, it identifies fourteen indices to represent the sustainability system of a PRH project: six for the ecological subsystem, five for the economic subsystem, and three for the social subsystem. Fourth, it quantifies the weights of the three subsystems and their representative indices, and an assessment model is established through expert surveys and the analytic network process (ANP). Finally, the paper carries out an empirical study of a PRH project in Nanjing, China, followed by suggestions to enhance its integrated sustainability. The sustainability system and evaluation model proposed in this paper are concise and easy to understand, and can provide a theoretical foundation and a scientific basis for the evaluation and optimization of PRH projects.
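As a rough illustration of the final aggregation step of such a model, the sketch below computes an integrated score as a weighted sum of subsystem scores, each in turn a weighted sum of its indices. All weights and scores are hypothetical placeholders; the paper derives its actual weights from expert surveys and ANP.

# Minimal sketch of weighted index aggregation; all numbers are illustrative,
# not the weights the paper obtains from expert surveys and ANP.
subsystem_weights = {"ecological": 0.40, "economic": 0.35, "social": 0.25}

# Hypothetical per-index weights (summing to 1 within each subsystem) and
# normalized index scores in [0, 1] for one PRH project.
index_weights = {
    "ecological": [0.20, 0.20, 0.15, 0.15, 0.15, 0.15],  # six indices
    "economic":   [0.25, 0.25, 0.20, 0.15, 0.15],        # five indices
    "social":     [0.40, 0.35, 0.25],                     # three indices
}
index_scores = {
    "ecological": [0.7, 0.6, 0.8, 0.5, 0.9, 0.6],
    "economic":   [0.6, 0.7, 0.5, 0.8, 0.6],
    "social":     [0.8, 0.7, 0.6],
}

def integrated_sustainability(sub_w, idx_w, idx_s):
    """Weighted sum of subsystem scores, each a weighted sum of its indices."""
    total = 0.0
    for name, w in sub_w.items():
        sub_score = sum(wi * si for wi, si in zip(idx_w[name], idx_s[name]))
        total += w * sub_score
    return total

score = integrated_sustainability(subsystem_weights, index_weights, index_scores)
print(f"Integrated sustainability score: {score:.3f}")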
Abstract:
There has been a recent spate of high-profile infrastructure cost overruns in Australia and internationally. This is just the tip of a longer-term and more deep-seated problem with initial budget estimating practice, well recognised in both academic research and industry reviews: the problem of uncertainty. A case study of the Sydney Opera House is used to identify and illustrate the key causal factors and system dynamics of cost overruns. It is conventionally the role of risk management to deal with such uncertainty, but the type and extent of the uncertainty involved in complex projects is shown to render established risk management techniques ineffective. This paper considers a radical advance on current budget estimating practice that combines a particular approach to statistical modelling with explicit training in estimating practice. The statistical modelling approach combines the probability management techniques of Savage, which operate on actual distributions of values rather than flawed summary representations of distributions, with the data pooling technique of Skitmore, in which the size of the reference set is optimised. The estimating training employs calibration development methods pioneered by Hubbard, which reduce the bias of experts caused by over-confidence and improve the consistency of subjective decision-making. A new framework for initial budget estimating practice is developed based on the combined statistical and training methods, with each technique explained and discussed.
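The core of the probability-management idea is to carry each budget component as a stored vector of sampled values rather than a point estimate, so the total-cost distribution comes out of row-wise arithmetic. The sketch below illustrates that idea with made-up component distributions; it is not the paper's framework or its calibration procedure.

# Minimal sketch: budget components as sampled distributions, totals by
# row-wise addition. All cost figures are illustrative placeholders.
import numpy as np

rng = np.random.default_rng(42)
n_trials = 10_000

# Hypothetical cost components (in $M), each a full sampled distribution.
structure = rng.lognormal(mean=4.0, sigma=0.3, size=n_trials)
services  = rng.lognormal(mean=3.5, sigma=0.4, size=n_trials)
fit_out   = rng.triangular(left=10, mode=15, right=40, size=n_trials)

total = structure + services + fit_out  # row-wise; correlations could be preserved too

point_estimate = np.median(structure) + np.median(services) + np.median(fit_out)
print(f"Mean total cost:              {total.mean():8.1f}")
print(f"80th percentile of total:     {np.percentile(total, 80):8.1f}")
print(f"P(total exceeds sum of medians): {(total > point_estimate).mean():.2f}")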
Abstract:
Skew correction of complex document images is a difficult task. We propose an edge-based connected-component approach for robust skew correction of documents with complex layout and content. The algorithm consists of two steps: an 'initialization' step to determine the image orientation from the centroids of the connected components, and a 'search' step to find the actual skew of the image. During initialization, we choose two different sets of points regularly spaced across the image, one running from left to right and the other from top to bottom. The image orientation is determined from the slopes between successive nearest neighbors of the points in each set. The search step then finds successive nearest neighbors that satisfy the parameters obtained in the initialization step, and the final skew is determined from the slopes obtained in this step. Unlike other connected-component-based methods, the proposed method does not require the binarization step that generally precedes connected-component analysis. The method works well for scanned documents with complex layouts at any skew angle, with a precision of 0.5 degrees.
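A minimal sketch of the underlying idea, reduced to one pass: estimate skew as a robust aggregate of the slopes between nearest-neighbor component centroids. This is an illustration only, not the authors' two-step algorithm; the centroid list and the max_gap threshold are hypothetical inputs.

# Minimal sketch: skew from slopes between nearest-neighbor centroids.
import math

def estimate_skew(centroids, max_gap=80.0):
    """centroids: list of (x, y) connected-component centroids (pixels)."""
    angles = []
    for (x1, y1) in centroids:
        # nearest neighbor strictly to the right within max_gap,
        # as components along a text line would be
        best, best_d = None, max_gap
        for (x2, y2) in centroids:
            if x2 <= x1:
                continue
            d = math.hypot(x2 - x1, y2 - y1)
            if d < best_d:
                best, best_d = (x2, y2), d
        if best is not None:
            angles.append(math.degrees(math.atan2(best[1] - y1, best[0] - x1)))
    angles.sort()
    return angles[len(angles) // 2] if angles else 0.0  # median slope = skew estimate

# e.g. centroids of components along a slightly rotated text line
print(estimate_skew([(10, 100), (40, 101), (70, 103), (100, 104), (130, 106)]))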
Abstract:
Modern database systems incorporate a query optimizer to identify the most efficient "query execution plan" for executing the declarative SQL queries submitted by users. A dynamic-programming-based approach is used to exhaustively enumerate the combinatorially large search space of plan alternatives and, using a cost model, to identify the optimal choice. While dynamic programming (DP) works very well for moderately complex queries with up to around a dozen base relations, it usually fails to scale beyond this stage due to its inherent exponential space and time complexity. DP therefore becomes practically infeasible for complex queries with a large number of base relations, such as those found in current decision-support and enterprise management applications. To address this problem, a variety of approaches have been proposed in the literature. Some completely jettison the DP approach and resort to alternative techniques such as randomized algorithms, whereas others retain DP but use heuristics to prune the search space to computationally manageable levels. In the latter class, a well-known strategy is "iterative dynamic programming" (IDP), wherein DP is employed bottom-up until it hits its feasibility limit and is then iteratively restarted with a significantly reduced subset of the execution plans currently under consideration. The experimental evaluation of IDP indicated that, with an appropriate choice of algorithmic parameters, it was possible to almost always obtain "good" (within a factor of two of the optimal) plans, in the few remaining cases mostly "acceptable" (within an order of magnitude of the optimal) plans, and only rarely a "bad" plan. While IDP is certainly an innovative and powerful approach, we have found that there are a variety of common query frameworks wherein it can fail to consistently produce good plans, let alone the optimal choice. This is especially so when star or clique components are present, increasing the complexity of the join graphs. Worse, this shortcoming is exacerbated when the number of relations participating in the query is scaled upwards.
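For concreteness, the sketch below shows the classic bottom-up DP enumeration over join orders whose exponential table growth motivates IDP. It is a generic textbook formulation with a toy cost model and selectivity, not the paper's algorithm or cost model.

# Minimal sketch of bottom-up DP over join orders: one table entry per
# relation subset, hence O(2^n) space. Cost model is a toy placeholder.
from itertools import combinations
from math import prod

def dp_join_order(card, sel=0.1):
    """card: {relation: cardinality}. Returns (cost, plan) for joining all relations."""
    rels = list(card)
    best = {frozenset([r]): (0.0, r) for r in rels}  # base relations: free scans
    for k in range(2, len(rels) + 1):
        for s in map(frozenset, combinations(rels, k)):  # O(2^n) table entries
            out = sel ** (k - 1) * prod(card[r] for r in s)  # toy join result size
            best[s] = min(
                ((best[l][0] + best[s - l][0] + out, (best[l][1], best[s - l][1]))
                 for m in range(1, k)
                 for l in map(frozenset, combinations(sorted(s), m))),
                key=lambda t: t[0])
    return best[frozenset(rels)]

cost, plan = dp_join_order({"A": 1000, "B": 100, "C": 10, "D": 5000})
print(cost, plan)  # cheapest bushy plan under the toy model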
Abstract:
Space-time block codes based on orthogonal designs are used for wireless communications with multiple transmit antennas, since they achieve full transmit diversity and have low decoding complexity. However, the rate of square real/complex orthogonal designs tends to zero as the number of antennas increases, whereas a rate-1 real orthogonal design (ROD) exists for any number of antennas. For complex orthogonal designs (CODs), rate-1 codes exist only for 1 and 2 antennas. In general, for n transmit antennas, the maximal rate of a COD is 1/2 + 1/n or 1/2 + 1/(n+1) for n even or odd, respectively. In this paper, we present a simple construction of maximal-rate CODs for any number of antennas from square CODs, which resembles the construction of rate-1 RODs from square RODs. These designs are shown to be amenable to the construction of a class of generalized CODs (called Coordinate-Interleaved Scaled CODs) with low peak-to-average power ratio (PAPR) having the same parameters as the maximal-rate codes. Simulation results indicate that these codes perform better than the existing maximal-rate codes under a peak power constraint while performing the same under an average power constraint.
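The two cases of the rate bound quoted above can be written in one closed form (the ceiling expression is simply a compact rewriting of the known maximal-rate result):

\[
R_{\max}(n) \;=\; \frac{\lceil n/2\rceil + 1}{2\,\lceil n/2\rceil} \;=\;
\begin{cases}
\dfrac{1}{2} + \dfrac{1}{n}, & n \text{ even},\\[4pt]
\dfrac{1}{2} + \dfrac{1}{n+1}, & n \text{ odd},
\end{cases}
\]

so that, for example, R_max(2) = 1 (the Alamouti code) and R_max(3) = R_max(4) = 3/4.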
Abstract:
Analysis of EXAFS data of complex systems containing more than one phase or more than one type of coordination has been discussed. It is shown that a modified treatment of the EXAFS function, as well as amplitude-ratio plots, provides a useful means of obtaining valuable structural information. The systems investigated are: a biphasic Ni+NiO mixture, NiAl2O4 with two coordinations for Ni, a NiO+NiAl2O4 mixture, the CoS+CoO system, and Ni dispersed on Al2O3. The results obtained with these systems have been most satisfactory and serve to illustrate the utility and applicability of the innovations described in this paper.
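For reference, the standard single-scattering EXAFS equation from which such modified treatments and amplitude-ratio analyses start is (the textbook form, not the paper's modified function):

\[
\chi(k) \;=\; \sum_j \frac{N_j S_0^2\, f_j(k)}{k R_j^2}\, e^{-2k^2\sigma_j^2}\, e^{-2R_j/\lambda(k)}\, \sin\!\bigl(2kR_j + \delta_j(k)\bigr),
\]

where, for each coordination shell j, N_j is the coordination number, R_j the interatomic distance, \sigma_j^2 the Debye-Waller factor, f_j(k) the backscattering amplitude, \lambda(k) the photoelectron mean free path, and \delta_j(k) the total phase shift. In an amplitude-ratio plot, taking the logarithm of the ratio of amplitudes of two related samples cancels the common f_j(k), isolating differences in N_j and \sigma_j^2.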
Abstract:
We present a method for measuring the local velocities and first-order variations in velocities in a time-varying image. The scheme is an extension of the generalized gradient model that encompasses the local variation of velocity within a local patch of the image. Motion within a patch is analyzed in parallel by 42 different spatiotemporal filters derived from 6 linearly independent spatiotemporal kernels. No constraints are imposed on the image structure, and there is no need for smoothness constraints on the velocity field. The aperture problem does not arise so long as there is some two-dimensional structure in the patch being analyzed. Among the advantages of the scheme is that there is no requirement to calculate second or higher derivatives of the image function. This makes the scheme robust in the presence of noise. The spatiotemporal kernels are of simple form, involving Gaussian functions, and are biologically plausible receptive fields. The validity of the scheme is demonstrated by application to both synthetic and real video image sequences and by direct comparison with another recently published scheme [Biol. Cybern. 63, 185 (1990)] for the measurement of complex optical flow.
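As a point of reference, the sketch below shows the simplest gradient-model velocity estimate over a patch: solve the brightness-constancy constraint Ix*u + Iy*v + It = 0 by least squares over all pixels. This is only the zeroth-order version of the idea, not the authors' 42-filter extension that also recovers the first-order velocity variations.

# Minimal sketch of a gradient-model patch velocity estimate.
import numpy as np

def patch_velocity(frame0, frame1):
    """Estimate one (u, v) for the patch from two consecutive frames."""
    Iy, Ix = np.gradient(frame0.astype(float))        # spatial derivatives
    It = frame1.astype(float) - frame0.astype(float)  # temporal derivative
    A = np.stack([Ix.ravel(), Iy.ravel()], axis=1)
    b = -It.ravel()
    # Least squares; a rank-deficient A signals the aperture problem
    # (no two-dimensional structure in the patch).
    (u, v), *_ = np.linalg.lstsq(A, b, rcond=None)
    return u, v

# Synthetic test: a Gaussian blob translated by (1, 0.5) pixels per frame
y, x = np.mgrid[0:32, 0:32]
blob = lambda cx, cy: np.exp(-((x - cx) ** 2 + (y - cy) ** 2) / 20.0)
print(patch_velocity(blob(15, 15), blob(16, 15.5)))  # approx (1.0, 0.5)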
Abstract:
A computational scheme for determining the dynamic stiffness coefficients of a linear, inclined, translating and viscously/hysteretically damped cable element is outlined. The coupling between the in-plane transverse and longitudinal forms of cable vibration is also taken into account. The scheme is based on converting the governing set of quasistatic boundary value problems into a larger equivalent set of initial value problems, which are then numerically integrated over the spatial domain using marching algorithms. Numerical results which bring out the nature of the dynamic stiffness coefficients are presented. A specific example of random vibration analysis of a long-span cable subjected to earthquake support motions, modeled as vector Gaussian random processes, is also discussed. The approach presented is versatile and capable of handling many complicating effects in cable dynamics in a unified manner.
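The boundary-value-to-initial-value conversion can be illustrated on a toy linear two-point problem: march one forced IVP and one homogeneous IVP across the domain, then superpose them to satisfy the far boundary condition. The ODE below is a stand-in, not the cable element equations.

# Minimal sketch: solve a linear BVP by superposing marched IVP solutions.
import numpy as np
from scipy.integrate import solve_ivp

L, yL = 1.0, 0.0  # domain length and far-end condition y(L) = 0
f = lambda s, y: [y[1], -40.0 * y[0] + np.sin(3 * s)]  # toy ODE y'' = -40 y + sin 3s

# IVP 1 (forced): satisfies the near-end condition y(0) = 0, with zero slope
y1 = solve_ivp(f, (0, L), [0.0, 0.0], dense_output=True)
# IVP 2 (homogeneous): y(0) = 0 with unit initial slope
g = lambda s, y: [y[1], -40.0 * y[0]]
y2 = solve_ivp(g, (0, L), [0.0, 1.0], dense_output=True)

# Superposition: y = y1 + c * y2, with c chosen so that y(L) = yL
c = (yL - y1.sol(L)[0]) / y2.sol(L)[0]
s = np.linspace(0, L, 5)
print(y1.sol(s)[0] + c * y2.sol(s)[0])  # satisfies both boundary conditions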
Abstract:
We present, through the use of Petri nets, modeling techniques for digital systems realizable using FPGAs. These Petri net models are used for logic validation at the logic design phase. The technique is illustrated by modeling practical circuits. Further, the utility of the technique with respect to timing analysis of the modeled digital systems is considered.
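The logic-validation step in such approaches rests on the Petri net "token game": a transition is enabled when every input place holds enough tokens, and firing it moves tokens from inputs to outputs. The two-transition net below is a generic toy handshake, not one of the paper's FPGA circuit models.

# Minimal sketch of Petri net firing semantics for logic validation.
marking = {"req": 1, "idle": 1, "busy": 0, "done": 0}

transitions = {
    "start":  ({"req": 1, "idle": 1}, {"busy": 1}),   # (input places, output places)
    "finish": ({"busy": 1}, {"done": 1, "idle": 1}),
}

def enabled(name):
    """A transition is enabled iff every input place holds enough tokens."""
    return all(marking[p] >= n for p, n in transitions[name][0].items())

def fire(name):
    """Consume tokens from input places, produce tokens in output places."""
    assert enabled(name), f"{name} is not enabled"
    for p, n in transitions[name][0].items():
        marking[p] -= n
    for p, n in transitions[name][1].items():
        marking[p] = marking.get(p, 0) + n

fire("start"); fire("finish")
print(marking)  # {'req': 0, 'idle': 1, 'busy': 0, 'done': 1}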