10 results for successive linearization
in Greenwich Academic Literature Archive - UK
Abstract:
A method is proposed for selecting the order in which users are detected in communication systems employing adaptive successive decision feedback multiuser detection. Systems employing channel coding are analyzed without the assumption of perfect decision feedback. The method is based on mean squared error (MSE) measurements made for each user during a training period. The analysis shows that the method delivers a BER performance improvement relative to other previously proposed ordering methods.
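As a concrete illustration of ordering users by training-period MSE, here is a minimal sketch; the array shapes, the lowest-MSE-first rule and all names below are assumptions for illustration, not taken from the paper:

    import numpy as np

    def detection_order(training_errors):
        """Order users for successive detection by training-period MSE.

        training_errors: complex array of shape (num_users, num_training_symbols)
        holding the decision errors measured for each user during training.
        Users with the lowest mean squared error are detected first, so their
        more reliable decisions are fed back when later users are detected.
        """
        mse = np.mean(np.abs(training_errors) ** 2, axis=1)   # per-user MSE
        return np.argsort(mse)                                # most reliable first

    # Illustrative example: 4 users, 100 training symbols of error samples
    rng = np.random.default_rng(0)
    errors = rng.standard_normal((4, 100)) + 1j * rng.standard_normal((4, 100))
    errors *= np.array([0.3, 1.0, 0.6, 0.2])[:, None]         # user 3 is cleanest
    print(detection_order(errors))                            # expected order: [3 0 2 1]

Detecting the most reliable user first is the usual motivation for MSE-based ordering, since its fed-back decisions are least likely to propagate errors to later users.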
Abstract:
A discretized series of events is a binary time series that indicates whether or not events of a point process in the line occur in successive intervals. Such data are common in environmental applications. We describe a class of models for them, based on an unobserved continuous-time discrete-state Markov process, which determines the rate of a doubly stochastic Poisson process, from which the binary time series is constructed by discretization. We discuss likelihood inference for these processes and their second-order properties and extend them to multiple series. An application involves modeling the times of exposures to air pollution at a number of receptors in Western Europe.
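A small simulation sketch of this model class, assuming a two-state hidden chain and, for brevity, holding the state fixed within each interval (a simplification of the continuous-time chain; all parameter values are illustrative):

    import numpy as np

    rng = np.random.default_rng(0)

    def simulate_binary_series(p_switch, rates, width, n_intervals):
        """Simulate a discretized doubly stochastic Poisson process (sketch).

        A hidden two-state chain selects the Poisson intensity; for brevity
        the state is held fixed within each interval, approximating the
        continuous-time chain of the model class. The binary series records
        whether at least one event occurs in each successive interval.
        """
        state, series = 0, []
        for _ in range(n_intervals):
            if rng.random() < p_switch[state]:        # possible state change
                state = 1 - state
            n_events = rng.poisson(rates[state] * width)
            series.append(1 if n_events > 0 else 0)
        return np.array(series)

    # Illustrative parameters: rare switching, low versus high exposure rate
    print(simulate_binary_series(p_switch=[0.05, 0.10], rates=[0.2, 2.0],
                                 width=1.0, n_intervals=30))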
Abstract:
The conception of the FUELCON architecture, a composite tool for the generation and validation of patterns for assigning fuel assemblies to the positions in the grid of a reactor core section, has undergone an evolution throughout the history of the project. Different options for the various subtasks were possible, envisioned, or actually explored or adopted. We project these successive, or even concomitant, configurations of the architecture into a meta-architecture, which, not by chance, happens to reflect basic choices in the field's history over the last decade.
Abstract:
Little attention has been given to the relation between fever and the severity of bronchiolitis. Therefore, the relation between fever and the clinical course of 90 infants (59 boys, 31 girls) hospitalised during one season with bronchiolitis was studied prospectively. Fever (defined as a single recording > 38.0°C or two successive recordings > 37.8°C) was present in 28 infants. These infants were older (mean age, 5.3 v 4.0 months), had a longer mean hospital stay (4.2 v 2.7 days), and a more severe clinical course (71.0% v 29.0%) than those infants without fever. Radiological abnormalities (collapse/consolidation) were found in 60.7% of the febrile group compared with 14.8% of the afebrile infants. These results suggest that monitoring of body temperature is important in bronchiolitis and that fever is likely to be associated with a more severe clinical course and radiological abnormalities.
Abstract:
The so-called dividing instant (DI) problem is an ancient historical puzzle encountered when attempting to represent what happens at the boundary instant which divides two successive states. The specification of such a problem requires a thorough exploration of the primitives of the temporal ontology and the corresponding time structure, as well as the conditions that the resulting temporal models must satisfy. The problem is closely related to the question of how to characterize the relationship between time periods with positive duration and time instants with no duration. It involves the characterization of the ‘closed’ and ‘open’ nature of time intervals, i.e. whether time intervals include their ending points or not. In the domain of artificial intelligence, the DI problem may be treated as an issue of how to represent different assumptions (or hypotheses) about the DI in a consistent way. In this paper, we shall examine various temporal models including those based solely on points, those based solely on intervals and those based on both points and intervals, and point out the corresponding DI problem with regard to each of these temporal models. We shall propose a classification of assumptions about the DI and provide a solution to the corresponding problem.
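To make the open/closed-endpoint issue concrete, here is a small illustrative representation (not the paper's formalism; the class and function names are invented for the example) of intervals that may include or exclude their endpoints, together with a check of which state, if either, holds at the dividing instant:

    from dataclasses import dataclass

    @dataclass(frozen=True)
    class Interval:
        """A time interval that may include or exclude each of its endpoints."""
        start: float
        end: float
        closed_left: bool = True
        closed_right: bool = False                # half-open by default

        def contains(self, t: float) -> bool:
            left_ok = self.start < t or (self.closed_left and t == self.start)
            right_ok = t < self.end or (self.closed_right and t == self.end)
            return left_ok and right_ok

    def dividing_instant_status(a: Interval, b: Interval) -> str:
        """Classify the boundary instant between two successive intervals a and b."""
        if a.end != b.start:
            return "not adjacent"
        owners = int(a.contains(a.end)) + int(b.contains(b.start))
        return ["gap: neither state holds at the instant",
                "consistent: exactly one state holds at the instant",
                "clash: both states hold at the instant"][owners]

    # The light is off over [0, 5) and on over [5, 10): no gap and no clash.
    print(dividing_instant_status(Interval(0, 5), Interval(5, 10)))

With the half-open convention shown, every boundary instant belongs to exactly one of the two successive states, which is one standard way of avoiding both the 'gap' and the 'clash' readings of the DI problem.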
Abstract:
We describe a heuristic method for drawing graphs which uses a multilevel framework combined with a force-directed placement algorithm. The multilevel technique matches and coalesces pairs of adjacent vertices to define a new graph and is repeated recursively to create a hierarchy of increasingly coarse graphs, G0, G1, …, GL. The coarsest graph, GL, is then given an initial layout, and the layout is refined and extended to all the graphs, starting with the coarsest and ending with the original. At each successive change of level, l, the initial layout for Gl is taken from its coarser and smaller child graph, Gl+1, and refined using force-directed placement. In this way the multilevel framework both accelerates the drawing and appears to give it a more global quality. The algorithm can compute both 2- and 3-dimensional layouts, and we demonstrate it on examples ranging in size from 10 to 225,000 vertices. It is also very fast, computing a 2D layout of a sparse graph in around 12 seconds for a 10,000-vertex graph and around 5-7 minutes for the largest graphs. This is an order of magnitude faster than recent implementations of force-directed placement algorithms.
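A compact sketch of the multilevel idea, using networkx's maximal matching for coarsening and its spring layout as the force-directed refinement step (these are simplified stand-ins, not the paper's own matching or placement scheme; function names are illustrative):

    import networkx as nx
    import numpy as np

    rng = np.random.default_rng(0)

    def coarsen(G):
        """Coalesce pairs of adjacent vertices (a maximal matching) into supernodes."""
        matching = nx.maximal_matching(G)
        mapping = {}
        for u, v in matching:
            mapping[u] = mapping[v] = (u, v)          # merged supernode label
        for n in G:
            mapping.setdefault(n, n)                  # unmatched vertices carry over
        coarse = nx.Graph()
        coarse.add_nodes_from(set(mapping.values()))
        coarse.add_edges_from((mapping[u], mapping[v])
                              for u, v in G.edges() if mapping[u] != mapping[v])
        return coarse, mapping

    def multilevel_layout(G, min_size=8):
        """Recursive multilevel force-directed layout (simplified sketch)."""
        if len(G) <= min_size:
            return nx.spring_layout(G)                # initial layout of coarsest graph
        coarse, mapping = coarsen(G)
        if len(coarse) == len(G):                     # nothing left to contract
            return nx.spring_layout(G)
        coarse_pos = multilevel_layout(coarse, min_size)
        # each vertex starts at its supernode's position, plus a small jitter
        init = {n: coarse_pos[mapping[n]] + 0.01 * rng.standard_normal(2) for n in G}
        return nx.spring_layout(G, pos=init, iterations=30)   # force-directed refinement

    pos = multilevel_layout(nx.random_geometric_graph(200, 0.12, seed=1))

Seeding each level with the positions of its coarser child graph is what gives the refinement a good global starting point while keeping each force-directed pass cheap.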
Abstract:
Flip chip interconnection using anisotropic conductive film (ACF) is now a very attractive technique for electronic packaging assembly. Although ACF is environmentally friendly, many factors may influence the reliability of the final ACF joint. External mechanical loading is one of these factors. Finite element analysis (FEA) was carried out to understand the effect of mechanical loading on the ACF joint. A 3-dimensional model of an adhesively bonded flip chip assembly was built and simulations were performed for the 3-point bending test. The results show that the stress is highest at the corners where the chip and ACF are connected, and that the ACF thickness increased in these corner regions. It was found that higher mechanical loading results in higher stress, which causes a greater gap between the chip and the substrate at the corner position. Experimental work was also carried out to study the electrical reliability of the ACF joint under the applied bending load. As predicted by the FEA, the corner joint was found to fail first, and successive open joints from the corner towards the middle were observed as the applied load increased.
Abstract:
In this paper, we shall critically examine a special class of graph matching algorithms that follow the approach of node-similarity measurement. A high-level algorithmic framework, the node-similarity graph matching (NSGM) framework, is proposed, from which many existing graph matching algorithms can be subsumed, including the eigen-decomposition method of Umeyama, the polynomial-transformation method of Almohamad, the hubs and authorities method of Kleinberg, and the Kronecker product successive projection methods of Wyk, etc. In addition, improved algorithms can be developed from the NSGM framework with respect to the corresponding results in graph theory. As an observation, it is pointed out that, in general, any algorithm that can be subsumed under the NSGM framework fails to work well for graphs with non-trivial auto-isomorphism structure.
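For illustration, here is a minimal node-similarity iteration in the spirit of such a framework, using the well-known coupled update S <- B S A^T + B^T S A with Frobenius normalisation followed by a greedy pairing (a generic example, not the NSGM formulation itself):

    import numpy as np

    def node_similarity(A, B, n_iter=50):
        """Node-similarity scores between graphs with adjacency matrices A and B.

        Uses the coupled update S <- B S A^T + B^T S A with Frobenius
        normalisation; S[i, j] scores how similar node i of graph B is to
        node j of graph A.
        """
        S = np.ones((B.shape[0], A.shape[0]))
        for _ in range(n_iter):
            S = B @ S @ A.T + B.T @ S @ A
            S /= np.linalg.norm(S)                    # Frobenius normalisation
        return S

    def greedy_match(S):
        """Greedily pair nodes by descending similarity (illustrative, not optimal)."""
        S = S.copy()
        pairs = []
        for _ in range(min(S.shape)):
            i, j = np.unravel_index(np.argmax(S), S.shape)
            pairs.append((int(i), int(j)))
            S[i, :], S[:, j] = -np.inf, -np.inf
        return pairs

    # Two 4-node path graphs with permuted vertex labels
    A = np.array([[0, 1, 0, 0], [1, 0, 1, 0], [0, 1, 0, 1], [0, 0, 1, 0]], float)
    P = np.eye(4)[[2, 0, 3, 1]]                       # a permutation matrix
    B = P @ A @ P.T
    print(greedy_match(node_similarity(A, B)))

Note that symmetric vertices receive identical similarity scores, so the pairing becomes ambiguous for graphs with non-trivial auto-isomorphism structure, consistent with the limitation noted above.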
Abstract:
The water sorption and desorption behaviour of three commercial polyacid-modified composite resins used in clinical dentistry has been studied in detail. Cured specimens of each material were subjected to two successive water uptake cycles in an atmosphere of 93% relative humidity, with one intervening desorption cycle in a desiccating atmosphere over concentrated sulfuric acid. Specimens were found to absorb and desorb water according to Fick's law up to Mt/M∞ values of approximately 0.5. Diffusion rates for uptake varied between cycles, ranging from 2.37–4.53 × 10⁻⁹ cm² s⁻¹ for the first cycle to 0.85–2.72 × 10⁻⁸ cm² s⁻¹ for the second cycle. Desorption rates were similar to those for second-cycle sorption, and ranged from 0.86 to 5.47 × 10⁻⁸ cm² s⁻¹. Equilibration times for first-cycle water uptake were greater than for second-cycle sorption and for desorption, and overall the behaviour of polyacid-modified composites in a high-humidity atmosphere was similar to that of conventional composites in water. It is concluded that the hydrophilic components of the former do not bring about an enhanced rate of water transport.
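For reference, the standard early-time Fickian approximation for a plane sheet of thickness l, valid roughly up to Mt/M∞ ≈ 0.5 and commonly used to extract diffusion coefficients of this kind from the initial slope of the sorption curve (the abstract does not state its exact fitting procedure, so this is given only as the usual relation):

    \frac{M_t}{M_\infty} \;\approx\; \frac{4}{l}\sqrt{\frac{Dt}{\pi}}
    \qquad\Longrightarrow\qquad
    D \;=\; \frac{\pi l^{2}}{16}
            \left(\frac{\mathrm{d}(M_t/M_\infty)}{\mathrm{d}\sqrt{t}}\right)^{2}

Here l is the specimen thickness (both faces exposed) and D the diffusion coefficient, with the slope taken from the initial linear region of the Mt/M∞ versus √t plot.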