628 results for REDUNDANT


Relevance: 10.00%

Abstract:

Circadian clocks maintain robust and accurate timing over a broad range of physiological temperatures, a characteristic termed temperature compensation. In Arabidopsis thaliana, ambient temperature affects the rhythmic accumulation of transcripts encoding the clock components TIMING OF CAB EXPRESSION1 (TOC1), GIGANTEA (GI), and the partially redundant genes CIRCADIAN CLOCK ASSOCIATED1 (CCA1) and LATE ELONGATED HYPOCOTYL (LHY). The amplitude and peak levels increase for TOC1 and GI RNA rhythms as the temperature increases (from 17 to 27 degrees C), whereas they decrease for LHY. However, as temperatures decrease (from 17 to 12 degrees C), CCA1 and LHY RNA rhythms increase in amplitude and peak expression level. At 27 degrees C, a dynamic balance between GI and LHY allows temperature compensation in wild-type plants, but circadian function is impaired in lhy and gi mutant plants. However, at 12 degrees C, CCA1 has more effect on the buffering mechanism than LHY, as the cca1 and gi mutations impair circadian rhythms more than lhy at the lower temperature. At 17 degrees C, GI is apparently dispensable for free-running circadian rhythms, although partial GI function can affect circadian period. Numerical simulations using the interlocking-loop model show that balancing LHY/CCA1 function against GI and other evening-expressed genes can largely account for temperature compensation in wild-type plants and the temperature-specific phenotypes of gi mutants.

Relevance: 10.00%

Abstract:

Recent terrorist events in the UK, such as the security alerts at British airports in August 2006 and the London bombings of July 2005, attracted extensive media and academic analysis. This study contends, however, that much of the commentary demonstrated a widespread failure among government agencies, academic and analytic experts and the wider media to comprehend the nature of the threat, and that it continues to distort comprehension of the extant danger. The principal failure, this argument maintains, was, and continues to be, one of an asymmetry of comprehension that mistakes the still relatively limited means of violent jihadist radicals for limited political ends. The misapprehension often stems from the language that surrounds the idea of 'terrorism', which increasingly restricts debate to an intellectually redundant search for the 'root causes' that give rise to the politics of complacency. In recent times this outlook has consistently underestimated the level of the threat to the security of the UK. This article argues that a more realistic appreciation of the current security condition requires abandoning the prevailing view that the domestic threat is best prosecuted as a criminal conspiracy. It demands instead a total strategy to deal with a totalizing threat. The empirical evidence demonstrates the existence of a physical threat, not merely the political fear of threat. The implementation of a coherent set of social policies for confronting the threat at home recognizes that securing state borders and maintaining internal stability are the first tasks of government. Fundamentally, this requires a return to an understanding of the Hobbesian conditions for sovereignty, which, despite the delusions of post-Cold War cosmopolitan multiculturalism, never went away.

Relevance: 10.00%

Abstract:

Spatial data are particularly useful in mobile environments. However, due to the low bandwidth of most wireless networks, developing large spatial database applications becomes a challenging process. In this paper, we provide the first attempt to combine two important techniques, multiresolution spatial data structures and semantic caching, towards efficient spatial query processing in mobile environments. Based on a study of the characteristics of multiresolution spatial data (MSD) and multiresolution spatial queries, we propose a new semantic caching model called Multiresolution Semantic Caching (MSC) for caching MSD in mobile environments. MSC enriches the traditional three-category query processing in semantic caching to five categories, improving performance in three ways: 1) reducing the number and complexity of remainder queries; 2) avoiding redundant transmission of spatial data already residing in the cache; 3) providing satisfactory answers before 100% of the query results have been transmitted to the client side. Our extensive experiments on a very large and complex real spatial database show that MSC outperforms traditional semantic caching models significantly.
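
To make the remainder-query idea concrete, here is a minimal sketch of the semantic-cache probe that the abstract's query categories build on: a query window is answered from overlapping cached regions and only the unanswered remainder is sent to the server. The `Rect` and `probe` names are illustrative, the clipping is deliberately simplistic, and the multiresolution levels and MSC's five categories are not modelled.

```python
# Hedged sketch of a semantic-cache probe; not the MSC algorithm itself.
from dataclasses import dataclass

@dataclass
class Rect:
    """Axis-aligned region used for both queries and cache entries."""
    x1: float
    y1: float
    x2: float
    y2: float

    def intersects(self, o: "Rect") -> bool:
        return self.x1 < o.x2 and o.x1 < self.x2 and self.y1 < o.y2 and o.y1 < self.y2

    def contains(self, o: "Rect") -> bool:
        return self.x1 <= o.x1 and self.y1 <= o.y1 and o.x2 <= self.x2 and o.y2 <= self.y2

def probe(cache, q):
    """Classify q against the cache and return the remainder query, if any."""
    for c in cache:
        if c.contains(q):
            return "full hit", None               # answered entirely from cache
    hits = [c for c in cache if c.intersects(q)]
    if not hits:
        return "miss", q                          # whole query goes to the server
    c = hits[0]                                   # clip against one overlap only,
    if c.x1 <= q.x1 and c.x2 >= q.x2:             # keeping the remainder rectangular
        if c.y1 <= q.y1:
            return "partial hit", Rect(q.x1, c.y2, q.x2, q.y2)
        return "partial hit", Rect(q.x1, q.y1, q.x2, c.y1)
    return "partial hit", q                       # fallback: refetch the whole window

cache = [Rect(0, 0, 10, 5)]
print(probe(cache, Rect(2, 1, 6, 4)))      # ('full hit', None)
print(probe(cache, Rect(2, 3, 6, 8)))      # ('partial hit', Rect(2, 5, 6, 8))
print(probe(cache, Rect(20, 20, 25, 25)))  # ('miss', ...)
```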

Relevance: 10.00%

Abstract:

Non-technical losses (NTL) identification and prediction are important tasks for many utilities. Data from a customer information system (CIS) can be used for NTL analysis. However, to perform NTL analysis accurately and efficiently, the original CIS data need to be pre-processed before any detailed analysis can be carried out. In this paper, we propose a feature-selection-based method for CIS data pre-processing that extracts the most relevant information for further analysis such as clustering and classification. By removing irrelevant and redundant features, feature selection is an essential step in the data mining process: it finds an optimal subset of features that improves the quality of results, giving faster processing, higher accuracy and simpler models with fewer features. A detailed feature selection analysis is presented in the paper. Both time-domain and load-shape data are compared based on accuracy, consistency and statistical dependencies between features.
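
As a rough illustration of the relevance/redundancy idea, the sketch below ranks features by correlation with the label and drops features that nearly duplicate an already-kept one. The paper's exact criteria are not given here; this greedy filter and all thresholds are assumptions for illustration.

```python
# Hedged sketch of a relevance/redundancy feature filter for CIS-style data.
import numpy as np

def select_features(X, y, min_relevance=0.15, max_redundancy=0.9):
    """Rank features by |corr(feature, label)|; keep a feature only if it is
    relevant enough and not too correlated with an already-kept feature."""
    relevance = np.array([abs(np.corrcoef(X[:, j], y)[0, 1]) for j in range(X.shape[1])])
    kept = []
    for j in np.argsort(-relevance):
        if relevance[j] < min_relevance:
            break  # remaining features are even less relevant
        if all(abs(np.corrcoef(X[:, j], X[:, k])[0, 1]) < max_redundancy for k in kept):
            kept.append(j)
    return kept

rng = np.random.default_rng(0)
X = rng.standard_normal((200, 5))
X[:, 3] = X[:, 0] + 0.01 * rng.standard_normal(200)   # near-duplicate of feature 0
y = (X[:, 0] + X[:, 1] > 0).astype(float)             # label driven by features 0 and 1
print(select_features(X, y))   # expected: 0 and 1 kept; 3 dropped as redundant
```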

Relevance: 10.00%

Abstract:

Freshwater is extremely precious; but even more precious than freshwater is clean freshwater. Although two-thirds of our planet is covered in water, industrial activity over the last century has contaminated our globe with chemicals in an unprecedented way, causing harm to humans and wildlife. We have to adopt a new scientific mindset to face this problem and protect this important resource. The Water Framework Directive (WFD; European Parliament and the Council, 2000) is a milestone legislative document that transformed the way water quality monitoring is undertaken across all Member States by introducing the Ecological and Chemical Status. A "good or higher" Ecological Status was expected to be achieved for all waterbodies in Europe by 2015. Yet for most of the European waterbodies that are determined to be at risk, or of moderate to bad quality, further information will be required so that adequate remediation strategies can be implemented. To date, water quality evaluation is based on five biological components (phytoplankton, macrophytes and benthic algae, macroinvertebrates and fishes) and various hydromorphological and physicochemical elements. The evaluation of the chemical status is principally based on 33 priority substances and on 12 xenobiotics considered dangerous for the environment. This approach takes into account only a fraction of the numerous xenobiotics that can be present in surface waters and cannot reveal all the possible causes of ecotoxicological stress acting in a water section. Mixtures of toxic chemicals may constitute an ecological risk that is not predictable from the concentrations of the single components. To improve water quality, sources of contamination and causes of ecological alteration need to be identified. On the other hand, the analysis of community structure, which is the result of multiple processes including hydrological constraints and physico-chemical stress, gives back only a "photograph" of the actual status of a site without revealing the causes and sources of the perturbation. A multidisciplinary approach, able to integrate the information obtained by different methods, such as community structure analysis and eco-genotoxicological studies, could help overcome some of the difficulties in properly identifying the different causes of stress in risk assessment. In synthesis, the river's ecological status is the result of a combination of multiple pressures that, for management purposes and quality improvement, have to be disentangled from each other. To reduce the present uncertainty in risk assessment, methods that establish quantitative links between levels of contamination and community alterations are needed. The analysis of macrobenthic invertebrate community structure has been widely used to identify sites subject to perturbation. Trait-based descriptors of community structure constitute a useful method in ecological risk assessment. The diagnostic capacity of freshwater biomonitoring could be improved by chronic sublethal toxicity testing of water and sediment samples. Because they require an exposure time that covers most of the species' life cycle, chronic toxicity tests are able to reveal negative effects on life-history traits at contaminant concentrations well below the acute toxicity level.
Furthermore, the responses of high-level endpoints (growth, fecundity, mortality) can be integrated to evaluate the impact on population dynamics, a highly relevant endpoint from the ecological point of view. To gain more accurate information about the potential causes and consequences of environmental contamination, the evaluation of adverse effects at the physiological, biochemical and genetic level is also needed. The use of different biomarkers and toxicity tests can give information about the sub-lethal and toxic load of environmental compartments. Biomarkers give essential information about exposure to toxicants, such as endocrine-disrupting compounds and genotoxic substances, whose negative effects cannot be detected using only high-level toxicological endpoints. The increasing presence of genotoxic pollutants in the environment has caused concern about the potential harmful effects of xenobiotics on human health, and interest in the development of new and more sensitive methods for the assessment of mutagenic and carcinogenic risk. Within the WFD, biomarkers and bioassays are regarded as important tools for gaining lines of evidence on cause-effect relationships in ecological quality assessment. Although the scientific community clearly recognizes the advantages and necessity of an ecotoxicological approach within ecological quality assessment, a recent review reports that, more than a decade after the publication of the WFD, only a few studies have attempted to integrate ecological water status assessment and biological methods (namely biomarkers or bioassays), and none of the fifteen reviewed studies included both biomarkers and bioassays. The integrated approach developed in this PhD thesis comprises a set of laboratory bioassays (Daphnia magna acute and chronic toxicity tests, Comet Assay and FPG-Comet) that were newly developed, modified taking a cue from standardized existing protocols, or applied to freshwater quality testing (ecotoxicological, genotoxicological and toxicogenomic assays), coupled with field investigations of macrobenthic community structure (SPEAR and EBI indexes). Together with the development of new bioassays with Daphnia magna, the feasibility of eco-genotoxicological testing of freshwater and sediment quality with Heterocypris incongruens was evaluated (Comet Assay and a protocol for chronic toxicity). However, the Comet Assay, although standardized, was not applied to freshwater samples because of the lack of sensitivity observed in this species after 24 h of exposure to relatively high (and not environmentally relevant) concentrations of reference genotoxicants. Furthermore, this species proved unsuitable for chronic toxicity testing as well, owing to the difficulty of evaluating fecundity as a sub-lethal endpoint of exposure and to complications arising from its biology and behaviour. The approach was applied to a pilot hydrographic sub-basin by selecting sections subject to different levels of anthropogenic pressure: this allowed us to establish the reference conditions, to select the most significant endpoints, and to evaluate the coherence of the responses of the different lines of evidence (alteration of community structure, eco-genotoxicological responses, alteration of gene expression profiles) and, finally, the diagnostic capacity of the monitoring strategy.
Significant correlations were found between the genotoxicological parameter Tail Intensity % (TI%) and the macrobenthic community descriptors SPEAR (p<0.001) and EBI (p<0.05), between the genotoxicological parameter describing DNA oxidative stress (ΔTI%) and mean nitrate levels (p<0.01), and between reproductive impairment (Failed Development % from D. magna chronic bioassays) and TI% (p<0.001) as well as EBI (p<0.001). While the correlation among parameters demonstrates a general coherence in the response to increasing impacts, the concomitant ability of each single endpoint to respond to specific sources of stress underlies the diagnostic capacity of the integrated approach, as demonstrated by stations presenting a mismatch among the different lines of evidence. The chosen set of bioassays, as well as the selected endpoints, do not provide redundant indications of water quality status; on the contrary, they contribute complementary pieces of information about the several stressors that act simultaneously on a waterbody section, providing this monitoring strategy with a solid diagnostic capacity. Our approach should provide opportunities for the integration of biological effects into monitoring programmes for surface water, especially in investigative monitoring. Moreover, it should provide a more realistic assessment of the impact and exposure of aquatic organisms to contaminants. Finally, this approach should help evaluate the drivers of change in biodiversity and their effects on the provision of ecosystem functions and services, that is, the direct and indirect contributions to human well-being.
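
For readers unfamiliar with this kind of cross-endpoint screen, the sketch below shows the shape of the calculation. The station values are simulated stand-ins (the real ones come from the field campaign), and Pearson's r is one plausible choice of test, not necessarily the one used in the thesis.

```python
# Hedged sketch of a cross-endpoint correlation screen with made-up data.
import numpy as np
from scipy.stats import pearsonr

rng = np.random.default_rng(0)
n_stations = 12
spear = rng.uniform(10, 60, n_stations)            # SPEAR index per station
ti = 80 - spear + rng.normal(0, 5, n_stations)     # Tail Intensity % (Comet Assay)

r, p = pearsonr(ti, spear)
print(f"TI% vs SPEAR: r = {r:.2f}, p = {p:.4g}")   # coherent endpoints correlate
```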

Relevance: 10.00%

Abstract:

This paper presents a general methodology for estimating and incorporating uncertainty in the controller and forward models for noisy nonlinear control problems. Conditional distribution modeling in a neural network context is used to estimate uncertainty around the prediction of neural network outputs. The developed methodology circumvents the dynamic programming problem by using the predicted neural network uncertainty to localize the possible control solutions to consider. A nonlinear multivariable system with different delays between the input-output pairs is used to demonstrate the successful application of the developed control algorithm. The proposed method is suitable for redundant control systems and allows us to model strongly non Gaussian distributions of control signal as well as processes with hysteresis.
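
The sketch below illustrates the core idea of localizing control solutions by forward-model uncertainty. The paper's conditional density network is replaced here by a small random-feature ensemble whose spread stands in for predictive uncertainty; the toy plant, thresholds and names are all assumptions for illustration.

```python
# Hedged sketch of uncertainty-localised control; not the paper's algorithm.
import numpy as np

rng = np.random.default_rng(0)

def plant(u):
    """Toy redundant, noisy, nonlinear plant: two controls, one output."""
    return np.sin(u[:, 0]) + 0.5 * u[:, 1] ** 2 + 0.05 * rng.standard_normal(len(u))

U = rng.uniform(-2, 2, size=(500, 2))     # input-output data over the range
Y = plant(U)

class RandomFeatureModel:
    """Ridge regression on random cosine features; one ensemble member."""
    def __init__(self, seed, n_feat=60):
        r = np.random.default_rng(seed)
        self.W = r.standard_normal((2, n_feat))
        self.b = r.uniform(0, 2 * np.pi, n_feat)
    def phi(self, U):
        return np.cos(U @ self.W + self.b)
    def fit(self, U, y):
        P = self.phi(U)
        self.w = np.linalg.solve(P.T @ P + 1e-3 * np.eye(P.shape[1]), P.T @ y)
        return self
    def predict(self, U):
        return self.phi(U) @ self.w

ensemble = [RandomFeatureModel(s).fit(U, Y) for s in range(10)]

def choose_control(y_target, n_candidates=2000, max_std=0.2):
    """Keep only candidates where the forward model is confident, then pick
    the one closest to the setpoint: a localized search rather than a
    dynamic programme. The confidence threshold is illustrative."""
    cand = rng.uniform(-2, 2, size=(n_candidates, 2))
    preds = np.stack([m.predict(cand) for m in ensemble])
    mu, sd = preds.mean(axis=0), preds.std(axis=0)
    err = np.abs(mu - y_target)
    err[sd > max_std] = np.inf                # discard uncertain candidates
    return cand[np.argmin(err)]

u_star = choose_control(y_target=1.0)
print("control:", u_star, "-> plant output:", plant(u_star[None])[0])
```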

Relevance: 10.00%

Abstract:

We consider the direct adaptive inverse control of nonlinear multivariable systems with different delays between each input-output pair. In direct adaptive inverse control, the inverse mapping is learned from examples of input-output pairs. This makes the obtained controller suboptimal, since the network may have to learn the response of the plant over a larger operational range than necessary. Moreover, in certain applications the control problem can be redundant, implying that the inverse problem is ill-posed. In this paper we propose a new algorithm for estimating and exploiting uncertainty in nonlinear multivariable control systems. This approach allows us to model strongly non-Gaussian distributions of control signals as well as processes with hysteresis. The proposed algorithm circumvents the dynamic programming problem by using the predicted neural network uncertainty to localise the possible control solutions to consider.
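
A toy example makes the ill-posedness concrete: when the plant is redundant, fitting the inverse mapping by least squares averages over the multiple valid controls and returns an invalid one. Everything below is an illustrative assumption, not the paper's method.

```python
# Hedged sketch: direct inverse modelling fails on a redundant plant.
# y = u^2 has two valid controls (+u and -u) for every positive setpoint.
import numpy as np

rng = np.random.default_rng(1)
U = rng.uniform(-2, 2, size=(400, 1))
Y = U ** 2 + 0.02 * rng.standard_normal((400, 1))

# Fit u = g(y) as a cubic polynomial in y, by least squares on (y, u) pairs.
P = np.hstack([Y ** k for k in range(4)])
w = np.linalg.lstsq(P, U, rcond=None)[0]

y_target = np.array([[1.0]])
u_hat = np.hstack([y_target ** k for k in range(4)]) @ w
print("inverse-model control:", u_hat.ravel())  # ~0: the average of +1 and -1,
                                                # which is not a valid control
```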

Relevance: 10.00%

Abstract:

The work describes the programme of activities relating to a mechanical study of the Conform extrusion process. The main objective was to provide a basic understanding of the mechanics of the Conform process, with particular emphasis placed on modelling using experimental and theoretical considerations. The experimental equipment used includes a state-of-the-art computer-aided data-logging system and high-temperature load cells (up to 260°C) manufactured from tungsten carbide. Full details of the experimental equipment are presented in Sections 3 and 4. A theoretical model is given in Section 5. The model presented is based on the upper-bound theorem, using a variation of the existing extrusion theories combined with temperature changes in the feed metal across the deformation zone. In addition, the constitutive equations used in the model have been generated from existing experimental data. Theoretical and experimental data are presented in tabular form in Section 6. The discussion of results includes a comprehensive graphical presentation of the experimental and theoretical data. The main findings are: (i) the establishment of stress/strain relationships and an energy balance in order to study the factors affecting redundant work, and hence a model suitable for design purposes; (ii) optimisation of the process, by determination of the extrusion pressure for the range of reductions and changes in the extrusion chamber geometry at lower wheel speeds; and (iii) an understanding of the control of the peak temperature reached during extrusion.
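
For orientation, the generic upper-bound energy balance that such models build on can be written as below. This is a textbook-style sketch (notation: mean flow stress σ̄, effective strain rate ε̄̇, friction factor m, velocity discontinuity Δv, billet cross-section A₀ and entry speed v₀), not the thesis's exact formulation, which additionally incorporates the temperature changes described above.

```latex
% Upper-bound energy balance (sketch): the power J* of a kinematically
% admissible velocity field bounds the actual power, splitting into
% homogeneous deformation, friction, and redundant (internal shear) terms.
J^{*} = \int_{V} \bar{\sigma}\,\dot{\bar{\varepsilon}}\,dV
      + \int_{S_f} m\,\frac{\bar{\sigma}}{\sqrt{3}}\,\lvert \Delta v \rvert\,dS
      + \int_{S_d} \frac{\bar{\sigma}}{\sqrt{3}}\,\lvert \Delta v \rvert\,dS ,
\qquad
p_{\mathrm{ext}} = \frac{J^{*}}{A_{0}\,v_{0}} \;\ge\; p_{\mathrm{actual}} .
```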

Relevance: 10.00%

Abstract:

PURPOSE: To design and validate a vision-specific quality-of-life assessment tool to be used in a clinical setting to evaluate low-vision rehabilitation strategy and management. METHODS: Previous vision-related questionnaires were assessed by low-vision rehabilitation professionals and patients for relevance and coverage. The 74 items selected were pretested to ensure correct interpretation. One hundred and fifty patients with low vision completed the chosen questions on four occasions to allow the selection of the most appropriate items. The vision-specific quality of life of patients with low vision was compared with that of 70 age-matched and gender-matched patients with normal vision and before and after low-vision rehabilitation in 278 patients. RESULTS: Items that were unreliable, internally inconsistent, redundant, or not relevant were excluded, resulting in the 25-item Low Vision Quality-of-Life Questionnaire (LVQOL). Completion of the LVQOL results in a summed score between 0 (a low quality of life) and 125 (a high quality of life). The LVQOL has a high internal consistency (α = 0.88) and good reliability (0.72). The average LVQOL score for a population with low vision (60.9 ± 25.1) was significantly lower than the average score of those with normal vision (100.3 ± 20.8). Rehabilitation improved the LVQOL score of those with low vision by an average of 6.8 ± 15.6 (17%). CONCLUSIONS: The LVQOL was shown to be an internally consistent, reliable, and fast method for measuring the vision-specific quality of life of the visually impaired in a clinical setting. It is able to quantify the quality of life of those with low vision and is useful in determining the effects of low-vision rehabilitation. Copyright (C) 2000 Elsevier Science Inc.
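
As a pointer to how the reported internal consistency is computed, here is a minimal sketch of Cronbach's alpha. The item responses below are simulated, so the printed value will not reproduce the study's α = 0.88.

```python
# Hedged sketch: Cronbach's alpha, the internal-consistency statistic the
# LVQOL study reports. Item data here are simulated, not the study's.
import numpy as np

def cronbach_alpha(items: np.ndarray) -> float:
    """items: (respondents, items) matrix of questionnaire scores."""
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1)
    total_var = items.sum(axis=1).var(ddof=1)
    return k / (k - 1) * (1 - item_vars.sum() / total_var)

rng = np.random.default_rng(0)
trait = rng.normal(size=(150, 1))                  # latent quality of life
items = trait + 1.5 * rng.normal(size=(150, 25))   # 25 noisy LVQOL-like items
print(f"alpha = {cronbach_alpha(items):.2f}")
```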

Relevance: 10.00%

Abstract:

Objectives: The Competitive Aggressiveness and Anger Scale (CAAS) was developed to measure antecedents of aggression in sport. The critique attacks the CAAS on three points: (1) the definition of aggression in sport adopted, (2) the "one size fits all" element in the thinking behind the scale's development, and (3) the nature of the CAAS Anger and Aggressiveness items. The objective of this response is to address misunderstandings in the critique. Methods: We identified a number of false assumptions that undermine the validity of the critique and attempt to clarify our position with respect to the criticisms made. Results: (1) The CAAS is criticised for a definition that it did not use. (2) We acknowledged in our limitations section that the CAAS may not be suitable for everyone, and we fully accept the limitations of any scale. We have since undertaken a large research project to establish whether the scale is valid across and within specific sports. (3) The fundamental misunderstanding inherent throughout the critique is that the CAAS was designed as a measure of aggression, rather than of anger and aggressiveness, rendering the critique of its items redundant. Conclusions: The critique misrepresents the authors of the CAAS and fails to present a coherent argument against its use. We hope to clarify our position here. The evidence to date suggests that the CAAS is a valid measure of anger and aggressiveness in many sports and that these concepts reliably differentiate players who admit unsanctioned aggression from those who do not.

Relevance: 10.00%

Abstract:

Amino acid substitution plays a vital role in both the molecular engineering of proteins and the analysis of structure-activity relationships. High-throughput substitution is achieved by codon randomisation, which generates a library of mutants (a randomised gene library) in a single experiment. For full randomisation, key codons are typically replaced with NNN (64 sequences) or NNG/T (32 sequences). This obligates cloning of redundant codons alongside those required to encode the 20 amino acids. As the number of randomised codons increases, there is therefore a progressive loss of randomisation efficiency; the number of genes required per protein rises exponentially. The redundant codons cause amino acids to be represented unevenly; for example, methionine is encoded just once within NNN, whilst arginine is encoded six times. Finally, the organisation of the genetic code makes it impossible to encode functional subsets of amino acids (e.g. polar residues only) in a single experiment. Here, we present a novel solution to randomisation in which genetic redundancy is eliminated; the number of different genes equals the number of encoded proteins, regardless of codon number. There is no inherent amino acid bias and any required subset of amino acids may be encoded in one experiment. This generic approach should be widely applicable in studies involving randomisation of proteins. © 2003 Elsevier Ltd. All rights reserved.
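
The uneven representation is easy to reproduce from the standard genetic code; the short sketch below counts codons per amino acid for NNN and for NNG/T (often written NNK). Only the standard code table is assumed.

```python
# Hedged sketch: count how often each amino acid is encoded by the NNN
# (64-codon) and NNG/T (32-codon) randomisation schemes, reproducing the
# bias described above (e.g. Arg/Ser/Leu x6 versus Met/Trp x1 under NNN).
from collections import Counter
from itertools import product

BASES = "TCAG"
# Standard genetic code with codons ordered T,C,A,G per position; '*' = stop.
AA = "FFLLSSSSYY**CC*WLLLLPPPPHHQQRRRRIIIMTTTTNNKKSSRRVVVVAAAADDEEGGGG"
CODE = {"".join(c): a for c, a in zip(product(BASES, repeat=3), AA)}

nnn = Counter(CODE.values())                               # all 64 codons
nnk = Counter(a for c, a in CODE.items() if c[2] in "GT")  # third base G or T

print("NNN:  ", dict(nnn))   # Ser/Arg/Leu: 6 each; Met/Trp: 1 each; 3 stops
print("NNG/T:", dict(nnk))   # 32 codons, all 20 amino acids, 1 stop (TAG)
```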

Relevance: 10.00%

Abstract:

This thesis sets out to investigate the role of cohesion in the organisation and processing of three text types in English and Arabic. In other words, it attempts to shed some light on the descriptive and explanatory power of cohesion in different text typologies. To this effect, three text types, namely literary fictional narrative, newspaper editorial and science, were analysed to ascertain the intra- and inter-sentential trends in textual cohesion characteristic of each text type in each language. In addition, two small-scale experiments, which aimed at exploring the facilitatory effect of one cohesive device (i.e. lexical repetition) on the comprehension of three English text types by Arab learners, were carried out. The first experiment examined this effect in an English science text; the second covered three English text types, i.e. fictional narrative, culturally-oriented and science. Some interesting and significant results have emerged from the textual analysis and the pilot studies. Most importantly, each text type tends to utilize the cohesive trends that are compatible with its readership, reader knowledge, reading style and pedagogical purpose. Whereas fictional narratives largely cohere through pronominal co-reference, editorials and science texts derive much cohesion from lexical repetition. As for cross-language differences, English opts for economy in the use of cohesive devices, while Arabic largely coheres through the redundant effect created by the high frequency of most of those devices. Thus, cohesion proves to be a variable rather than a homogeneous phenomenon, dictated by text type among other factors. The results of the experiments suggest that lexical repetition does facilitate the comprehension of English texts by Arab learners. Fictional narratives are found to be easier to process and understand than expository texts. Consequently, cohesion can assist in the processing of text as it can in its creation.

Relevance: 10.00%

Abstract:

This study aims to investigate to what extent the views of the managers of the enterprises to be privatized are a barrier to smooth implementation of privatization, as opposed to other problems. Accordingly, the research tackles two main issues: identification and analysis of the major problems encountered in the implementation of the Egyptian privatization programme, and the levels at which these problems exist, while proposing different approaches to tackle them; and the views of public sector top- and middle-level managers regarding the main issues of privatization. The study relies upon a literature survey, interviews with stakeholders, a survey of managers' attitudes and several illustrative case studies. A model of "good practice" for the smooth and effective implementation of privatization has been designed. Practice in Egypt has then been studied and compared with the "good practice" model. Lack of strictness and firmness in implementing the announced privatization programme has been found to be a characteristic of Egyptian practice. This is partly attributable to the inadequacy of the programme and partly to the different obstacles to implementation. The main obstacles are the doubtful desirability of privatization on the part of the managers at different implementation levels, resistance of stakeholders, inadequacy of the legal framework governing privatization, redundant labour, lack of an efficient monitoring system allowing for accountability, inefficient marketing of privatization, ineffective communication, insufficient information at different levels, and problems related to valuation and selling procedures. A large part of the thesis is concerned with SOE (State Owned Enterprise) managers' attitudes towards and understanding of privatization (appraised through surveys). Although most managers have stated their acceptance of privatization, many of their responses show that they do not accept selling SOEs. They understand privatization to include enterprise reform and restructuring, changing procedures and giving more authority to company executives, but not necessarily selling SOEs. The majority of managers still see many issues that have to be addressed for smooth implementation of privatization, e.g. insufficiency of information, incompleteness of the legal framework, restructuring and labour problems. The main contribution to knowledge of this thesis is the study of the problems of implementing privatization in developing countries, especially managers' resistance to privatization as a major change, partly because of the threat it poses and partly because of a lack of understanding of privatization and the implications of operating private businesses. A programme of persuading managers and offsetting the unfavourable effects is recommended as an outcome of the study. Five keywords for the national Index to Theses are: Egypt, privatization, implementation of privatization, problems of implementing privatization and managers' attitudes towards privatization.

Relevance: 10.00%

Abstract:

During the last decade the use of randomised gene libraries has had an enormous impact in the field of protein engineering. Such libraries comprise many variations of a single gene, in which codon replacements are used to substitute key residues of the encoded protein. The expression of such libraries generates a library of randomised proteins, which can subsequently be screened for desired or novel activities. Randomisation in this fashion has predominantly been achieved by the inclusion of the codons NNN or NNG/T, in which N represents any of the four bases A, C, G or T. The use of these codons, however, necessitates the cloning of redundant codons at each position of randomisation, in addition to those required to encode the twenty possible amino acid substitutions. As degenerate codons must be included at each position of randomisation, this results in a progressive loss of randomisation efficiency as the number of randomised positions is increased. The ratio of genes to proteins in these libraries rises exponentially with each position of randomisation, creating large gene libraries which generate protein libraries of limited diversity upon expression. In addition to these problems of library size, the cloning of redundant codons also results in the generation of protein libraries in which substituted amino acids are unevenly represented. As several of the randomised codons may encode the same amino acid, for example serine, which is encoded six times by the codon NNN, an inherent bias may be introduced into the resulting protein library during the randomisation procedure. The work outlined here describes the development of a novel randomisation technique aimed at eliminating codon redundancy from randomised gene libraries, thus addressing the problems of library size and bias associated with the cloning of redundant codons.
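
To put a number on that exponential growth: with NNN, every randomised position needs 64 codons to cover 20 amino acids, so the gene-to-protein ratio scales as (64/20)^n, as the small sketch below shows (sequences containing stop codons are ignored for simplicity).

```python
# Hedged sketch: genes-per-protein redundancy under NNN randomisation,
# ignoring stop codons. The ratio (64/20)^n grows exponentially with the
# number of randomised positions n.
for n in range(1, 7):
    genes, proteins = 64 ** n, 20 ** n
    print(f"{n} position(s): {genes:>14,} genes / {proteins:>11,} proteins "
          f"= {genes / proteins:7.1f}x redundancy")
```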

Relevance: 10.00%

Abstract:

We have simulated the performance of various apertures used in Coded Aperture Imaging optically. Coded pictures of extended and continuous-tone planar objects from the Annulus, Twin Annulus, Fresnel Zone Plate and the Uniformly Redundant Array have been decoded using a noncoherent correlation process. We have compared the tomographic capabilities of the Twin Annulus with those of Uniformly Redundant Arrays based on quadratic residues and m-sequences. We discuss ways of reducing the 'd.c.' background of the various apertures used. The non-ideal system point spread function inherent in a noncoherent optical correlation process produces artifacts in the reconstruction. Artifacts are also introduced as a result of unwanted cross-correlation terms from out-of-focus planes. We find that the URA based on m-sequences exhibits good spatial resolution and out-of-focus behaviour when imaging extended objects.
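
For the quadratic-residue construction, a one-dimensional sketch is enough to show the property that makes URAs attractive: with a balanced decoding array, the periodic correlation is a single peak on a perfectly flat floor. The prime length 31 and the variable names below are illustrative assumptions.

```python
# Hedged sketch: 1D uniformly redundant array from quadratic residues
# (a Legendre sequence) and its balanced-decoding correlation.
import numpy as np

p = 31                                         # prime array length (p % 4 == 3)
qr = {(i * i) % p for i in range(1, p)}        # quadratic residues mod p
A = np.array([1 if i in qr else 0 for i in range(p)])  # aperture: 1 = open
G = np.where(A == 1, 1.0, -1.0)                # balanced decoding array

# Periodic cross-correlation of aperture and decoder: a delta-like system
# point spread function.
psf = np.array([np.sum(A * np.roll(G, -k)) for k in range(p)])
print("peak:", psf[0])                               # (p-1)/2 = 15 open elements
print("off-peak values:", set(np.round(psf[1:], 6)))  # flat floor at -1
```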