31 results for REDUNDANT
in QUB Research Portal - Research Directory and Institutional Repository for Queen's University Belfast
Abstract:
Cybr (also known as Cytip, CASP, and PSCDBP) is an interleukin-12-induced gene expressed exclusively in hematopoietic cells and tissues that associates with Arf guanine nucleotide exchange factors known as cytohesins. Cybr levels are dynamically regulated during T-cell development in the thymus and upon activation of peripheral T cells. In addition, Cybr is induced in activated dendritic cells and has been reported to regulate dendritic cell (DC)-T-cell adhesion. Here we report the generation and characterization of Cybr-deficient mice. Despite the selective expression in hematopoietic cells, there was no intrinsic defect in T- or B-cell development or function in Cybr-deficient mice. The adoptive transfer of Cybr-deficient DCs showed that they migrated efficiently and stimulated proliferation and cytokine production by T cells in vivo. However, competitive stem cell repopulation experiments showed a defect in the abilities of Cybr-deficient T cells to develop in the presence of wild-type precursors. These data suggest that Cybr is not absolutely required for hematopoietic cell development or function, but stem cells lacking Cybr are at a developmental disadvantage compared to wild-type cells. Collectively, these data demonstrate that despite its selective expression in hematopoietic cells, the role of Cybr is limited or largely redundant. Previous in vitro studies using overexpression or short interfering RNA inhibition of the levels of Cybr protein appear to have overestimated its immunological role.
Abstract:
PURPOSE. To examine internal consistency, refine the response scale, and obtain a linear scoring system for the visual function instrument, the Daily Living Tasks Dependent on Vision (DLTV). METHODS. Data were available from 186 participants with a clinical diagnosis of age-related macular degeneration (AMD) who completed the 22-item DLTV (DLTV-22) according to a four-point ordinal response scale. An independent group of 386 participants with AMD was administered a reduced version of the DLTV with 11 items (DLTV-11), according to a five-point response scale. Rasch analysis was performed on both datasets and used to generate item statistics for measure order, response odds ratios per item and per person, and infit and outfit mean square statistics. The Rasch output from the DLTV-22 was examined to identify redundant items and to assess factorial validity and person and item measure separation reliabilities. RESULTS. The average rating for the DLTV-22 changed monotonically with the magnitude of the latent person trait. The expected and observed average measures were extremely close, with step calibrations evenly separated for the four-point ordinal scale. In the case of the DLTV-11, step calibrations were not as evenly separated, suggesting that the five-point scale should be reduced to either a four- or three-point scale. Five items in the DLTV-22 were removed, and all 17 remaining items had good infit and outfit mean squares. Principal component analysis of the residuals from the Rasch analysis identified two domains containing 7 and 10 items, respectively. The domains had high person separation reliabilities (0.86 and 0.77 for domains 1 and 2, respectively) and item measure reliabilities (0.99 and 0.98 for domains 1 and 2, respectively). CONCLUSIONS. Given its improved internal consistency, the established accuracy and precision of its rating scale, and a validated domain structure, the DLTV constitutes a useful instrument for assessing visual function in older adults with AMD.
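For reference, the rating-scale (Andrich) form of the Rasch model that analyses of this kind typically fit can be written as below; the symbols θ (person measure), δ (item difficulty) and τ (step calibration) are generic notation, not values taken from this study.

```latex
% Andrich rating scale model: probability that person n gives response
% category x (0..m) to item i, with person measure \theta_n, item
% difficulty \delta_i and shared step calibrations \tau_k (\tau_0 \equiv 0).
P(X_{ni}=x) =
  \frac{\exp\!\big(\sum_{k=0}^{x}(\theta_n-\delta_i-\tau_k)\big)}
       {\sum_{j=0}^{m}\exp\!\big(\sum_{k=0}^{j}(\theta_n-\delta_i-\tau_k)\big)}
```

The step calibrations τ_k are what the abstract describes as being evenly (or unevenly) separated across the response categories.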
Abstract:
This paper presents two new approaches for use in complete process monitoring. The first concerns the identification of nonlinear principal component models. This involves the application of linear principal component analysis (PCA) prior to the identification of a modified autoassociative neural network (AAN) as the required nonlinear PCA (NLPCA) model. The benefits are that (i) the number of the reduced set of linear principal components (PCs) is smaller than the number of recorded process variables, and (ii) the set of PCs is better conditioned, as redundant information is removed. The result is a new set of input data for a modified neural representation, referred to as a T2T network. The T2T NLPCA model is then used for complete process monitoring, involving fault detection, identification and isolation. The second approach introduces a new variable reconstruction algorithm, developed from the T2T NLPCA model. Variable reconstruction can enhance the findings of the contribution charts still widely used in industry by reconstructing the outputs from faulty sensors to produce more accurate fault isolation. These ideas are illustrated using recorded industrial data relating to developing cracks in an industrial glass melter process. A comparison of linear and nonlinear models, together with the combined use of contribution charts and variable reconstruction, is presented.
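As a rough illustration of the two-stage idea (a minimal sketch only, not the authors' T2T implementation; the layer sizes, component counts and simulated data are assumptions):

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.neural_network import MLPRegressor

# Simulated stand-in for the recorded process variables (rows = samples).
rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 30))

# Stage 1: linear PCA removes redundant information and leaves a smaller,
# better-conditioned set of scores than the raw process variables.
pca = PCA(n_components=10)
T = pca.fit_transform(X)

# Stage 2: an autoassociative (bottleneck) network trained to reproduce the
# scores; the narrow hidden layer acts as the nonlinear principal components.
aan = MLPRegressor(hidden_layer_sizes=(8, 3, 8), activation="tanh",
                   max_iter=2000, random_state=0)
aan.fit(T, T)

# Residuals between the scores and their reconstruction feed the monitoring
# statistic (an SPE-type chart); unusually large values flag faulty samples.
spe = np.sum((T - aan.predict(T)) ** 2, axis=1)
```

The benefit of the preliminary linear compression is that the nonlinear network sees far fewer, decorrelated inputs, which eases its training and conditioning.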
Abstract:
This paper describes the application of multivariate regression techniques to the Tennessee Eastman benchmark process for modelling and fault detection. Two methods are applied: linear partial least squares (PLS), and a nonlinear variant of this procedure using a radial basis function (RBF) inner relation. The performance of the RBF networks is enhanced through the use of a recently developed training algorithm which uses quasi-Newton optimization to ensure an efficient and parsimonious network; details of this algorithm are given in the paper. The PLS and PLS/RBF methods are then used to create on-line inferential models of delayed process measurements. As these measurements relate to the final product composition, these models suggest that on-line statistical quality control analysis should be possible for this plant. The generation of 'soft sensors' for these measurements has the further effect of introducing a redundant element into the system, redundancy which can then be used to generate a fault detection and isolation scheme for these sensors. This is achieved by arranging the sensors and models in a manner comparable to the dedicated estimator scheme of Clark et al. (1975, IEEE Trans. Aerosp. Electron. Syst., AES-11, 465-473). The effectiveness of this scheme is demonstrated on a series of simulated sensor and process faults, with full detection and isolation shown to be possible for sensor malfunctions, and detection feasible in the case of process faults. Suggestions for enhancing the diagnostic capacity in the latter case are covered towards the end of the paper.
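A minimal sketch of the soft-sensor idea follows (generic PLS regression on simulated data, not the paper's Tennessee Eastman models; the variable names and the 3-sigma limit are illustrative assumptions):

```python
import numpy as np
from sklearn.cross_decomposition import PLSRegression

# Simulated stand-in: X holds routine on-line measurements, y a delayed
# composition measurement that the inferential model is to predict.
rng = np.random.default_rng(1)
X = rng.normal(size=(500, 20))
y = X[:, :3] @ np.array([0.5, -0.2, 0.8]) + 0.05 * rng.normal(size=500)

# Identify the inferential (soft-sensor) model on historical data.
pls = PLSRegression(n_components=5).fit(X[:300], y[:300])

# The soft sensor duplicates the hardware analyser; the residual between the
# measured and inferred values is the redundancy exploited for fault detection.
residual = y[300:] - pls.predict(X[300:]).ravel()
limit = 3 * residual[:50].std()          # crude limit from fault-free data
fault_flags = np.abs(residual) > limit
print(f"{fault_flags.sum()} samples flagged out of {fault_flags.size}")
```

Arranging one such model per measured variable, each driven by the remaining sensors, gives a dedicated-estimator style bank of the kind referred to in the abstract.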
Abstract:
The heterodimeric cytokine IL-23 plays a non-redundant role in the development of cell-mediated, organ-specific autoimmune diseases such as experimental autoimmune encephalomyelitis (EAE). To further characterize the mechanisms of action of IL-23 in autoimmune inflammation, we administered IL-23 systemically at different time points during both relapsing and chronic EAE. Surprisingly, we found suppression of disease in all treatment protocols. We observed a reduction in the number of activated macrophages and microglia in the CNS, while T cell infiltration was not significantly affected. Disease suppression correlated with reduced expansion of myelin-reactive T cells, loss of T-bet expression, loss of lymphoid structures, and increased production of IL-6 and IL-4. Here we describe an unexpected function of exogenous IL-23 in limiting the scope and extent of organ-specific autoimmunity.
Abstract:
Baited cameras are often used for abundance estimation wherever alternative techniques are precluded, e.g. in abyssal systems and areas such as reefs. This method has thus far used models of the arrival process that are deterministic and, therefore, permit no estimate of precision. Furthermore, errors due to multiple counting of fish and missing those not seen by the camera have restricted the technique to using only the time of first arrival, leaving a lot of data redundant. Here, we reformulate the arrival process using a stochastic model, which allows the precision of abundance estimates to be quantified. Assuming a non-gregarious, cross-current-scavenging fish, we show that prediction of abundance from first arrival time is extremely uncertain. Using example data, we show that simple regression-based prediction from the initial (rising) slope of numbers at the bait gives good precision, accepting certain assumptions. The most precise abundance estimates were obtained by including the declining phase of the time series, using a simple model of departures, and taking account of scavengers beyond the camera’s view, using a hidden Markov model.
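As a toy illustration of why a stochastic formulation yields a precision estimate (the Poisson arrival rate, observation window and simulation settings below are assumptions for illustration, not the paper's fitted model):

```python
import numpy as np

# If scavengers arrive independently at a rate proportional to local abundance,
# the count at the bait during the initial (rising) phase grows roughly linearly,
# so the arrival rate (and hence abundance) can be estimated from that slope.
rng = np.random.default_rng(2)
true_rate = 0.4                                   # arrivals per minute (assumed)
t = np.arange(1, 31)                              # minutes since bait deployment

def rising_slope(rng):
    counts = np.cumsum(rng.poisson(true_rate, size=t.size))  # stochastic arrivals
    return np.polyfit(t, counts, 1)[0]            # regression slope of the rise

# Repeating the stochastic simulation gives a spread of slope estimates, i.e. a
# precision for the abundance estimate that a deterministic model cannot supply.
slopes = np.array([rising_slope(rng) for _ in range(1000)])
print(f"mean slope {slopes.mean():.2f}, "
      f"95% interval {np.percentile(slopes, [2.5, 97.5])}")
```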
Abstract:
As a class of defects in software requirements specifications, inconsistency has been widely studied in both requirements engineering and software engineering. It has been increasingly recognized that maintaining consistency alone often results in other types of non-canonical requirements, including incompleteness of a requirements specification, vague requirements statements, and redundant requirements statements. It is therefore desirable for inconsistency handling to take the related non-canonical requirements into account in requirements engineering. To address this issue, we propose an intuitive generalization of logical techniques for handling inconsistency to techniques suitable for managing non-canonical requirements, which deals with incompleteness and redundancy in addition to inconsistency. We first argue that measuring non-canonical requirements plays a crucial role in handling them effectively. We then present a measure-driven logic framework for managing non-canonical requirements. The framework consists of five main parts: identifying non-canonical requirements, measuring them, generating candidate proposals for handling them, choosing commonly acceptable proposals, and revising the requirements according to the chosen proposals. This generalization can be considered an attempt to handle non-canonical requirements alongside logic-based inconsistency handling in requirements engineering.
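The five parts can be pictured as a simple pipeline; the sketch below is a schematic skeleton only, with function names and a toy dictionary representation chosen for illustration rather than taken from the paper's logic-based formalism:

```python
# Schematic skeleton of the five-part framework; names and data shapes are
# illustrative assumptions, not the paper's formalism.
def identify(requirements):
    """Flag inconsistent, incomplete or redundant (non-canonical) statements."""
    return [r for r in requirements if r.get("non_canonical")]

def measure(flagged):
    """Attach a degree (e.g. of inconsistency or redundancy) to each flagged item."""
    return {r["id"]: r.get("severity", 1.0) for r in flagged}

def generate_proposals(measures):
    """Produce candidate handling proposals, ordered by the measured severity."""
    return [f"revise {rid}" for rid, _ in sorted(measures.items(), key=lambda kv: -kv[1])]

def choose(proposals):
    """Select the commonly acceptable proposals (trivially, the first one here)."""
    return proposals[:1]

def revise(specification, accepted):
    """Revise the specification according to the chosen proposals."""
    return specification, accepted

spec = [{"id": "R1", "non_canonical": True, "severity": 0.8},
        {"id": "R2", "non_canonical": False}]
print(revise(spec, choose(generate_proposals(measure(identify(spec))))))
```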
Abstract:
In an age of depleting oil reserves and increasing energy demand, humanity faces a stalemate between environmentalism and politics, where crude oil is traded at record highs yet the spotlight on being ‘green’ and sustainable is stronger than ever. A key theme on today’s political agenda is energy independence from foreign nations, and the United Kingdom is bracing itself for a nuclear renaissance which, it is hoped, will feed the rapacious centralised system that the UK is structured upon. But what if this centralised system were dismantled, and in its place stood dozens of cities which grow from and monopolise their own energy? Rather than one dominant network, would a series of autonomous city-based energy systems not offer a mutually profitable alternative? Bio-Port is a utopian vision of a ‘Free Energy City’ set in Liverpool, where the old dockyards, redundant space, and the Mersey Estuary have been transformed into bio-productive algae farms. Bio-Port Free Energy City is a utopian ideal where energy is superabundant; in fact so abundant that meters are obsolete. The city functions as an energy generator and thrives on its own product with minimal impact upon the planet it inhabits. Algaculture is the fundamental energy source, where a matrix of algae reactors swamps the abandoned dockyards, which themselves have been further expanded and reclaimed from the River Mersey. Each year, the algae farm is capable of producing over 200 million gallons of bio-fuel, which in turn can produce enough electricity to power almost 2 million homes. The metabolism of Free Energy City is circular and holistic, where the waste products of one process are simply the inputs of a new one. Livestock farming, once traditionally a high-carbon countryside exercise, has become urbanised. Cattle are located alongside the algae matrix, and waste gases emitted by farmyards and livestock are largely sequestered by algal blooms or anaerobically converted to natural gas. Bio-Port Free Energy City mitigates the imbalances between ecology and urbanity, and exemplifies an environment where nature and the human machine can function productively and in harmony with one another. According to James Lovelock, our population has grown in number to the point where our presence is perceptibly disabling the planet, but in order to reverse the effects of our humanist flaws, it is vital that new eco-urban utopias are realised.
Abstract:
This paper proposes max separation clustering (MSC), a new non-hierarchical clustering method for feature extraction from optical emission spectroscopy (OES) data in plasma etch process control applications. OES data is high-dimensional and inherently highly redundant, with the result that it is difficult, if not impossible, to recognize useful features and key variables by direct visualization. MSC is developed for clustering variables with distinctive patterns and providing effective pattern representation by a small number of representative variables. The relationship between signal-to-noise ratio (SNR) and clustering performance is highlighted, leading to a requirement that low-SNR signals be removed before applying MSC. Experimental results on industrial OES data show that MSC with low-SNR signal removal produces effective summarization of the dominant patterns in the data.
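A rough sketch of the preprocessing idea on simulated spectra follows; k-means on standardised channel traces stands in for MSC itself (which is not reproduced here), and the SNR threshold and data are assumptions:

```python
import numpy as np
from sklearn.cluster import KMeans

# Simulated OES-like data: 100 channels x 200 time samples built from two
# underlying patterns, plus a few low-SNR channels that carry only noise.
rng = np.random.default_rng(3)
base = np.vstack([np.sin(np.linspace(0, 6, 200)), np.linspace(0, 1, 200)])
X = np.repeat(base, 50, axis=0) + 0.05 * rng.normal(size=(100, 200))
X[::10] = 0.01 * rng.normal(size=(10, 200))       # low-SNR channels

# Step 1: remove low-SNR channels before clustering, as the abstract requires.
snr = X.std(axis=1) / 0.05
kept = X[snr > 2.0]

# Step 2: cluster the remaining channels by the shape of their traces and keep
# the channel nearest each centroid as the representative variable.
Z = (kept - kept.mean(axis=1, keepdims=True)) / kept.std(axis=1, keepdims=True)
km = KMeans(n_clusters=2, n_init=10, random_state=0).fit(Z)
reps = [int(np.argmin(np.linalg.norm(Z - c, axis=1))) for c in km.cluster_centers_]
print("representative channels (indices within the kept set):", reps)
```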
Abstract:
The initial part of this paper reviews the early challenges (c. 1980) in achieving real-time silicon implementations of DSP computations. In particular, it discusses research on application-specific architectures, including bit-level systolic circuits, that led to important advances in achieving the DSP performance levels then required. These were many orders of magnitude greater than those achievable using programmable (including early DSP) processors, and were demonstrated through the design of commercial digital correlator and digital filter chips. As is discussed, an important challenge was the application of these concepts to recursive computations as occur, for example, in Infinite Impulse Response (IIR) filters. An important breakthrough was to show how fine-grained pipelining can be used if arithmetic is performed most significant bit (msb) first. This can be achieved using redundant number systems, including carry-save arithmetic. This research and its practical benefits were again demonstrated through a number of novel IIR filter chip designs which, at the time, exhibited performance much greater than previous solutions. The architectural insights gained, coupled with the regular nature of many DSP and video processing computations, also provided the foundation for new methods for the rapid design and synthesis of complex DSP System-on-Chip (SoC) Intellectual Property (IP) cores. This included the creation of a wide portfolio of commercial SoC video compression cores (MPEG2, MPEG4, H.264) for very high performance applications ranging from cell phones to High Definition TV (HDTV). The work provided the foundation for systematic methodologies, tools and design flows, including high-level design optimizations based on "algorithmic engineering", and also led to the creation of the Abhainn tool environment for the design of complex heterogeneous DSP platforms comprising processors and multiple FPGAs. The paper concludes with a discussion of the problems faced by designers in developing complex DSP systems using current SoC technology.
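A tiny illustration of the carry-save (redundant) representation mentioned above; this is a generic software sketch of the arithmetic idea, not the msb-first pipelined circuits described in the paper:

```python
def carry_save_add(s, c, x):
    """Add operand x to the redundant (sum, carry) pair without propagating carries."""
    new_s = s ^ c ^ x                              # bitwise sum of the three inputs
    new_c = ((s & c) | (s & x) | (c & x)) << 1     # carries, shifted into place
    return new_s, new_c

# Accumulate several operands; each step uses only bitwise operations, so the
# step delay is independent of the wordlength, which is the property that
# enables the fine-grained pipelining discussed above.
s, c = 0, 0
for operand in (13, 7, 22, 5):
    s, c = carry_save_add(s, c, operand)

# Only when the final result is needed is the pair resolved by one real addition.
assert s + c == 13 + 7 + 22 + 5
print(s, c, s + c)
```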
Abstract:
A novel high-performance bit-parallel architecture to perform square root and division is proposed. Relevant VLSI design issues have been addressed. By employing redundant arithmetic and a semisystolic schedule, the design achieves a throughput that is independent of the size of the array.
Abstract:
A high-performance VLSI architecture to perform multiply-accumulate, division and square root operations is proposed. The circuit is highly regular, requires only minimal control and can be pipelined right down to the bit level. The system can also be reconfigured on every cycle to perform any one of these operations. The gate count per row has been estimated at (27n + 70) gate equivalents, where n is the divisor wordlength. The throughput rate, which equals the clock speed, is the same for each operation and is independent of the wordlength. This is achieved through the combination of pipelining and redundant arithmetic. With a 1.0 µm CMOS technology and extensive pipelining, throughput rates in excess of 70 million operations per second are expected.
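As a quick worked instance of the quoted complexity figure (the wordlengths below are chosen purely for illustration):

```python
def gates_per_row(n):
    """Estimated gate equivalents per row for divisor wordlength n (27n + 70)."""
    return 27 * n + 70

for n in (8, 16, 32):
    print(f"n = {n:2d}: {gates_per_row(n)} gate equivalents per row")
# e.g. n = 16 gives 27 * 16 + 70 = 502 gate equivalents per row.
```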