39 results for higher level collective agreement
in QUB Research Portal - Research Directory and Institutional Repository for Queen's University Belfast
Abstract:
Adaptability to changing circumstances is a key feature of living creatures. Understanding such adaptive processes is central to developing successful autonomous artifacts. In this paper two perspectives are brought to bear on the issue of adaptability. The first is a short term perspective which looks at adaptability in terms of the interactions between the agent and the environment. The second perspective involves a hierarchical evolutionary model which seeks to identify higher-order forms of adaptability based on the concept of adaptive meta-constructs. Task-orientated and agent-centered models of adaptive processes in artifacts are considered from these two perspectives. The former is represented by the fitness function approach found in evolutionary learning, and the latter in terms of the concepts of empowerment and homeokinesis found in models derived from the self-organizing systems approach. A meta-construct approach to adaptability based on the identification of higher level meta-metrics is also outlined. © 2009 Published by Elsevier B.V.
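As a rough illustration of the fitness-function approach to task-orientated adaptation mentioned in this abstract, the following minimal (1+1) evolution strategy adapts a single parameter toward a target; the task, fitness function and constants are invented purely for illustration.

```python
import random

def fitness(x):
    # Hypothetical task-orientated fitness: closeness to a target value.
    return -abs(x - 3.0)

def one_plus_one_es(generations=500, sigma=0.5, seed=0):
    """Minimal (1+1) evolution strategy: mutate the parent and keep
    the mutant only if it scores at least as well on the fitness
    function."""
    rng = random.Random(seed)
    parent = rng.uniform(-10, 10)
    for _ in range(generations):
        child = parent + rng.gauss(0, sigma)
        if fitness(child) >= fitness(parent):
            parent = child
    return parent

best = one_plus_one_es()
```

The loop encodes the task purely in the fitness function, which is exactly the task-orientated framing the abstract contrasts with agent-centred measures such as empowerment.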
Abstract:
The work presented is concerned with the estimation of manufacturing cost at the concept design stage, when little technical information is readily available. The work focuses on the nose cowl sections of a wide range of engine nacelles built at Bombardier Aerospace Shorts of Belfast. A core methodology is presented that: defines manufacturing cost elements that are prominent; utilises technical parameters that are highly influential in generating those costs; establishes the linkage between these two; and builds the associated cost estimating relations into models. The methodology is readily adapted to deal with both the early and more mature conceptual design phases, which thereby highlights the generic, flexible and fundamental nature of the method. The early concept cost model simplifies cost as a cumulative element that can be estimated using higher level complexity ratings, while the mature concept cost model breaks manufacturing cost down into a number of constituents that are each driven by their own specific drivers. Both methodologies have an average error of less than ten percent when correlated with actual findings, thus achieving an acceptable level of accuracy. By way of validity and application, the research is firmly based on industrial case studies and practice and addresses the integration of design and manufacture through cost. The main contribution of the paper is the cost modelling methodology. The elemental modelling of the cost breakdown structure through materials, part fabrication, assembly and their associated drivers is relevant to the analytical design procedure, as it utilises design definition and complexity that is understood by engineers.
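The kind of cost estimating relation described in this abstract can be sketched as a simple parametric model; the cost elements, drivers and rates below are hypothetical, not Bombardier's actual figures.

```python
def nose_cowl_cost(skin_area_m2, part_count, complexity_rating,
                   material_rate=180.0, fab_rate=95.0, assy_rate=60.0):
    """Illustrative cost estimating relation (CER): prominent cost
    elements, each driven by an influential technical parameter,
    summed into a concept-stage estimate. All rates are invented."""
    material = material_rate * skin_area_m2
    fabrication = fab_rate * part_count * complexity_rating
    assembly = assy_rate * part_count
    return {
        "material": material,
        "fabrication": fabrication,
        "assembly": assembly,
        "total": material + fabrication + assembly,
    }

estimate = nose_cowl_cost(skin_area_m2=12.0, part_count=40,
                          complexity_rating=1.3)
```

The `complexity_rating` driver stands in for the higher level complexity ratings the early concept model relies on when detailed definition is unavailable.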
Abstract:
The primary intention of this paper is to review the current state of the art in engineering cost modelling as applied to aerospace. This is a topic of current interest and in addressing the literature, the presented work also sets out some of the recognised definitions of cost that relate to the engineering domain. The paper does not attempt to address the higher-level financial sector but rather focuses on the costing issues directly relevant to the engineering process, primarily those of design and manufacture. This is of more contemporary interest as there is now a shift towards the analysis of the influence of cost, as defined in more engineering-related terms, in an attempt to link into integrated product and process development (IPPD) within a concurrent engineering environment. Consequently, the cost definitions are reviewed in the context of the nature of cost as applicable to the engineering process stages: from bidding through to design, to manufacture, to procurement and ultimately, to operation. The linkage and integration of design and manufacture is addressed in some detail. This leads naturally to the concept of engineers influencing and controlling cost within their own domain rather than trusting this to financiers who have little control over the cause of cost. In terms of influence, the engineer creates the potential for cost and in a concurrent environment this requires models that integrate cost into the decision making process.
Abstract:
The paper is primarily concerned with the modelling of aircraft manufacturing cost. The aim is to establish an integrated life cycle balanced design process through a systems engineering approach to interdisciplinary analysis and control. The cost modelling is achieved using the genetic causal approach that enforces product family categorisation and the subsequent generation of causal relationships between deterministic cost components and their design source. This utilises causal parametric cost drivers and the definition of the physical architecture from the Work Breakdown Structure (WBS) to identify product families. The paper presents applications to the overall aircraft design with a particular focus on the fuselage as a subsystem of the aircraft, including fuselage panels and localised detail, as well as engine nacelles. The higher level application to aircraft requirements and functional analysis is investigated and verified relative to life cycle design issues for the relationship between acquisition cost and Direct Operational Cost (DOC), for a range of both metal and composite subsystems. Maintenance is considered in some detail as an important contributor to DOC and life cycle cost. The lower level application to aircraft physical architecture is investigated and verified for the WBS of an engine nacelle, including a sequential build stage investigation of the materials, fabrication and assembly costs. The studies are then extended by investigating the acquisition cost of aircraft fuselages, including the recurring unit cost and the non-recurring design cost of the airframe sub-system. The systems costing methodology is facilitated by the genetic causal cost modelling technique as the latter is highly generic, interdisciplinary, flexible, multilevel and recursive in nature, and can be applied at the various analysis levels required of systems engineering. Therefore, the main contribution of the paper is a methodology for applying systems engineering costing, supported by the genetic causal cost modelling approach, whether at a requirements, functional or physical level.
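The recursive, multilevel character the abstract attributes to the approach can be sketched as a cost roll-up over a Work Breakdown Structure; the WBS shape and all figures below are hypothetical.

```python
def rollup(node):
    """Recursively sum deterministic cost components over a Work
    Breakdown Structure (WBS). Leaf nodes carry their own cost;
    parent nodes aggregate their children."""
    own = node.get("cost", 0.0)
    return own + sum(rollup(child) for child in node.get("children", []))

# Invented engine-nacelle WBS fragment for illustration only.
nacelle_wbs = {
    "name": "engine nacelle",
    "children": [
        {"name": "nose cowl", "children": [
            {"name": "materials", "cost": 2160.0},
            {"name": "fabrication", "cost": 4940.0},
            {"name": "assembly", "cost": 2400.0},
        ]},
        {"name": "fan cowl doors", "cost": 7300.0},
    ],
}

total = rollup(nacelle_wbs)
```

Because the same function applies at any node, the estimate can be queried at whichever analysis level systems engineering requires, mirroring the recursive property claimed for the genetic causal technique.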
Abstract:
BACKGROUND: Although severe encephalopathy has been proposed as a possible contraindication to the use of noninvasive positive-pressure ventilation (NPPV), increasing clinical reports showed it was effective in patients with impaired consciousness and even coma secondary to acute respiratory failure, especially hypercapnic acute respiratory failure (HARF). To further evaluate the effectiveness and safety of NPPV for severe hypercapnic encephalopathy, a prospective case-control study was conducted at a university respiratory intensive care unit (RICU) in patients with acute exacerbation of chronic obstructive pulmonary disease (AECOPD) during the past 3 years. METHODS: Forty-three of 68 consecutive AECOPD patients requiring ventilatory support for HARF were divided into 2 groups, which were carefully matched for age, sex, COPD course, tobacco use and previous hospitalization history, according to the severity of encephalopathy: 22 patients with Glasgow coma scale (GCS) <10 served as group A and 21 with GCS ≥10 as group B. RESULTS: Compared with group B, group A had a higher level of baseline arterial partial CO2 pressure ((102 +/- 27) mmHg vs (74 +/- 17) mmHg, P <0.01), lower levels of GCS (7.5 +/- 1.9 vs 12.2 +/- 1.8, P <0.01), arterial pH value (7.18 +/- 0.06 vs 7.28 +/- 0.07, P <0.01) and partial O2 pressure/fraction of inspired O2 ratio (168 +/- 39 vs 189 +/- 33, P <0.05). The NPPV success rate and hospital mortality were 73% (16/22) and 14% (3/22) respectively in group A, which were comparable to those in group B (68% (15/21) and 14% (3/21) respectively, all P > 0.05), but group A needed on average 7 cmH2O higher maximal pressure support during NPPV, and 4, 4 and 7 days longer of NPPV time, RICU stay and hospital stay respectively than group B (P <0.05 or P <0.01). NPPV therapy failed in 12 patients (6 in each group) because of excessive airway secretions (7 patients), hemodynamic instability (2), worsening of dyspnea and deterioration of gas exchange (2), and gastric content aspiration (1). CONCLUSIONS: Selected patients with severe hypercapnic encephalopathy secondary to HARF can be treated as effectively and safely with NPPV as awake patients with HARF due to AECOPD; a trial of NPPV should be instituted to reduce the need for endotracheal intubation in patients with severe hypercapnic encephalopathy who are otherwise good candidates for NPPV due to AECOPD.
Abstract:
Extending the work presented in Prasad et al. (IEEE Proceedings on Control Theory and Applications, 147, 523-37, 2000), this paper reports a hierarchical nonlinear physical model-based control strategy to account for the problems arising due to complex dynamics of drum level and governor valve, and demonstrates its effectiveness in plant-wide disturbance handling. The strategy incorporates a two-level control structure consisting of lower-level conventional PI regulators and a higher-level nonlinear physical model predictive controller (NPMPC) for mainly set-point manoeuvring. The lower-level PI loops help stabilise the unstable drum-boiler dynamics and allow faster governor valve action for power and grid-frequency regulation. The higher-level NPMPC provides an optimal load demand (or set-point) transition by effective handling of plant-wide interactions and system disturbances. The strategy has been tested in a simulation of a 200-MW oil-fired power plant at Ballylumford in Northern Ireland. A novel approach is devised to test the disturbance rejection capability in severe operating conditions. Low frequency disturbances were created by making random changes in radiation heat flow on the boiler-side, while condenser vacuum was fluctuating in a random fashion on the turbine side. In order to simulate high-frequency disturbances, pulse-type load disturbances were made to strike at instants which are not an integral multiple of the NPMPC sampling period. Impressive results have been obtained during both types of system disturbances and extremely high rates of load changes, right across the operating range. These results compared favourably with those from a conventional state-space generalized predictive control (GPC) method designed under similar conditions.
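The lower-level PI loops in such a two-level structure can be sketched as a discrete-time regulator; the gains, sample time and first-order plant below are illustrative and bear no relation to the Ballylumford plant model.

```python
class PIController:
    """Discrete-time PI regulator of the kind used as a lower-level
    loop under a higher-level predictive controller. Gains here are
    arbitrary illustrative values."""
    def __init__(self, kp, ki, dt):
        self.kp, self.ki, self.dt = kp, ki, dt
        self.integral = 0.0

    def update(self, setpoint, measurement):
        error = setpoint - measurement
        self.integral += error * self.dt
        return self.kp * error + self.ki * self.integral

# Regulate a crude first-order lag toward a set-point of 1.0.
pi = PIController(kp=2.0, ki=1.0, dt=0.1)
level = 0.0
for _ in range(200):
    u = pi.update(1.0, level)
    level += 0.1 * (u - level)   # toy stand-in for drum-level dynamics
```

In the paper's architecture the set-point fed to such loops would itself be scheduled by the higher-level NPMPC during load transitions.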
Abstract:
Background
Interaction of a drug or chemical with a biological system can result in a gene-expression profile or signature characteristic of the event. Using a suitably robust algorithm these signatures can potentially be used to connect molecules with similar pharmacological or toxicological properties by gene expression profile. Lamb et al. first proposed the Connectivity Map [Lamb et al. (2006), Science 313, 1929–1935] to make successful connections among small molecules, genes, and diseases using genomic signatures.
Results
Here we have built on the principles of the Connectivity Map to present a simpler and more robust method for the construction of reference gene-expression profiles and for the connection scoring scheme, which importantly allows the evaluation of statistical significance of all the connections observed. We tested the new method with two randomly generated gene signatures and three experimentally derived gene signatures (for HDAC inhibitors, estrogens, and immunosuppressive drugs, respectively). Our testing with this method indicates that it achieves a higher level of specificity and sensitivity and so advances the original method.
Conclusion
The method presented here not only offers more principled statistical procedures for testing connections, but more importantly it provides effective safeguard against false connections at the same time achieving increased sensitivity. With its robust performance, the method has potential use in the drug development pipeline for the early recognition of pharmacological and toxicological properties in chemicals and new drug candidates, and also more broadly in other 'omics sciences.
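In the spirit of the connection scoring this abstract describes (though not the paper's actual statistic), a toy rank-based connection score might look like the following; the gene names and ranks are invented.

```python
def connection_score(reference_ranks, up_genes, down_genes):
    """Toy Connectivity-Map-style score: genes in the query's up-set
    should sit near the top of the reference ranking (rank 1 = most
    up-regulated) and down-set genes near the bottom. Returns a value
    in [-1, 1]; +1 is a perfect positive connection."""
    n = len(reference_ranks)

    def mean_rank(genes):
        return sum(reference_ranks[g] for g in genes) / len(genes)

    up_term = 1 - 2 * (mean_rank(up_genes) - 1) / (n - 1)
    down_term = 2 * (mean_rank(down_genes) - 1) / (n - 1) - 1
    return (up_term + down_term) / 2

# Invented reference profile of six genes.
ranks = {"g1": 1, "g2": 2, "g3": 3, "g4": 4, "g5": 5, "g6": 6}
score = connection_score(ranks, up_genes=["g1", "g2"], down_genes=["g5", "g6"])
```

In practice the statistical significance of such a score would be assessed against a null distribution of random signatures, which is the safeguard against false connections the abstract emphasises.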
Abstract:
PURPOSE. Advanced glycation end products (AGEs) form irreversible cross-links with many macromolecules and have been shown to accumulate in tissues at an accelerated rate in diabetes. In the present study, AGE formation in vitreous was examined in patients of various ages and in patients with diabetes. Ex vivo investigations were performed on bovine vitreous incubated in glucose to determine AGE formation and cross-linking of vitreous collagen. METHODS. By means of an AGE-specific enzyme-linked immunosorbent assay (ELISA), AGE formation was investigated in vitreous samples obtained after pars plana vitrectomy in patients with and without diabetes. In addition, vitreous AGEs were investigated in bovine vitreous collagen after incubation in high glucose, high glucose with aminoguanidine, or normal saline for as long as 8 weeks. AGEs and AGE cross-linking were subsequently determined by quantitative and qualitative assays. RESULTS. There was a significant correlation between AGEs and increasing age in patients without diabetes (r = 0.74). Furthermore, a comparison between age-matched diabetic and nondiabetic vitreous showed a significantly higher level of AGEs in the patients with diabetes (P < 0.005). Collagen purified from bovine vitreous incubated in 0.5 M glucose showed an increase in AGE formation when observed in dot blot analysis, immunogold labeling, and AGE ELISA. Furthermore, there was increased cross-linking of collagen in the glucose-incubated vitreous, when observed by sodium dodecyl sulfate-polyacrylamide gel electrophoresis and protein separation. This cross-linking was effectively inhibited by coincubation with 10 mM aminoguanidine. CONCLUSIONS. This study suggests that AGEs may form in vitreous with increasing age.
Advanced glycation and AGE cross-linking of the vitreous collagen network may help to explain the vitreous abnormalities characteristic of diabetes.
Abstract:
For many applications of emotion recognition, such as virtual agents, the system must select responses while the user is speaking. This requires reliable on-line recognition of the user’s affect. However most emotion recognition systems are based on turnwise processing. We present a novel approach to on-line emotion recognition from speech using Long Short-Term Memory Recurrent Neural Networks. Emotion is recognised frame-wise in a two-dimensional valence-activation continuum. In contrast to current state-of-the-art approaches, recognition is performed on low-level signal frames, similar to those used for speech recognition. No statistical functionals are applied to low-level feature contours. Framing at a higher level is therefore unnecessary and regression outputs can be produced in real-time for every low-level input frame. We also investigate the benefits of including linguistic features on the signal frame level obtained by a keyword spotter.
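Frame-wise processing of the kind this abstract describes starts by slicing the signal into overlapping low-level frames; a minimal sketch follows, with frame length and hop chosen arbitrarily rather than taken from the paper.

```python
def frame_signal(samples, frame_len, hop):
    """Split a 1-D signal into overlapping low-level frames of the
    kind frame-wise recognisers operate on. frame_len and hop are in
    samples; a recogniser would emit one regression output per frame."""
    frames = []
    for start in range(0, len(samples) - frame_len + 1, hop):
        frames.append(samples[start:start + frame_len])
    return frames

signal = list(range(100))          # stand-in for audio samples
frames = frame_signal(signal, frame_len=25, hop=10)
```

Because each frame is available as soon as its samples arrive, outputs can be produced in real time per frame, which is the property the abstract contrasts with turn-wise processing.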
A Comparison of the Flow Structures and Losses Within Vaned and Vaneless Stators for Radial Turbines
Abstract:
This paper details the numerical analysis of different vaned and vaneless radial inflow turbine stators. Selected results are presented from a test program carried out to determine performance differences between the radial turbines with vaned stators and vaneless volutes under the same operating conditions. A commercial computational fluid dynamics code was used to develop numerical models of each of the turbine configurations, which were validated using the experimental results. From the numerical models, areas of loss generation in the different stators were identified and compared, and the stator losses were quantified. Predictions showed the vaneless turbine stators to incur lower losses than the corresponding vaned stator at matching operating conditions, in line with the trends in measured performance. Flow conditions at rotor inlet were studied and validated with internal static pressure measurements so as to judge the levels of circumferential nonuniformity for each stator design. In each case, the vaneless volutes were found to deliver a higher level of uniformity in the rotor inlet pressure field. [DOI: 10.1115/1.2988493]
Abstract:
The better models of e-Gov posit high levels of informational communication between citizen and state. Unfortunately, in one area, that communication has traditionally been poor: that is, access to sources of law. There have been a number of reasons for this, but a primary one has been that law was historically mediated for the citizen by the legal profession. This situation is changing with ever increasing numbers of unrepresented litigants being involved at all levels of national court systems in each and every country as well as a generally higher level of intrusion of legislation into everyday home and business life. There have been attempts to improve access through internet based services, but these have improved communication ('understanding of law') to only a limited extent. It may be time, this article suggests, to consider re-engineering legal sources so that they better fit the needs of e-Gov.
Abstract:
Using data from the 2002 and 2009 Northern Ireland Life and Times (NILT) surveys, we examine attitudes towards immigrant and ethnic minority groups in Northern Ireland. We suggest that Protestant and unionist communities experience a higher level of cultural threat than Catholic and nationalist communities on account of the ‘parity of esteem’ principle that has informed changes in the province since the Belfast Agreement of 1998. Our analyses confirm that, while there is evidence for some level of anti-immigrant sentiment across all groups, Protestants and unionists do indeed report relatively more negative attitudes towards a range of immigrant and ethnic target groups compared to Catholic, nationalist, or respondents who do not identify with either religious or political category. The analyses further suggest that their higher level of perceived cultural threat partially accounts for this difference. We suggest that cultural threat can be interpreted as a response to changes in Northern Ireland that have challenged the dominant status enjoyed by Protestants and unionists in the past.
Abstract:
The prevalence of multicore processors is bound to drive most kinds of software development towards parallel programming. To limit the difficulty and overhead of parallel software design and maintenance, it is crucial that parallel programming models allow an easy-to-understand, concise and dense representation of parallelism. Parallel programming models such as Cilk++ and Intel TBBs attempt to offer a better, higher-level abstraction for parallel programming than threads and locking synchronization. It is not straightforward, however, to express all patterns of parallelism in these models. Pipelines are an important parallel construct, although difficult to express in Cilk and TBBs in a straightforward way without a verbose restructuring of the code. In this paper we demonstrate that pipeline parallelism can be easily and concisely expressed in a Cilk-like language, which we extend with input, output and input/output dependency types on procedure arguments, enforced at runtime by the scheduler. We evaluate our implementation on real applications and show that our Cilk-like scheduler, extended to track and enforce these dependencies, has performance comparable to Cilk++.
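Pipeline parallelism of the kind this abstract discusses can be sketched with plain threads and queues; this loosely mimics the stage-to-stage dependencies a dependency-aware scheduler would enforce, and is not the Cilk-like runtime described in the paper.

```python
import queue
import threading

def stage(worker, inbox, outbox):
    """One pipeline stage: consume items from inbox, apply worker,
    forward results. A None sentinel shuts the stage down and is
    propagated so downstream stages terminate too."""
    while True:
        item = inbox.get()
        if item is None:
            outbox.put(None)
            break
        outbox.put(worker(item))

q1, q2, q3 = queue.Queue(), queue.Queue(), queue.Queue()
threading.Thread(target=stage, args=(lambda x: x * 2, q1, q2)).start()
threading.Thread(target=stage, args=(lambda x: x + 1, q2, q3)).start()

for i in range(5):
    q1.put(i)
q1.put(None)

results = []
while (item := q3.get()) is not None:
    results.append(item)
```

Here the queues make the producer-consumer dependencies explicit by hand; the paper's contribution is precisely to let argument dependency types express this so the verbose plumbing disappears.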
Abstract:
Certain policy areas with considerable impact on young people's educational experiences and achievements, notably assessment and qualifications, do not involve consultation with young people to any meaningful extent. Findings from a national study, which included focus groups with 243 students in the 14-19 phase, are presented with respect to student consultation and participation in such policy areas. A lack of meaningful consultation regarding what students see as 'higher level' policy agendas (such as qualifications provision, choice or structure) was found. Students are therefore 'voiceless' in relation to major qualifications reforms.
Abstract:
Multiple-cue probability learning (MCPL) involves learning to predict a criterion when outcome feedback is provided for multiple cues. A great deal of research suggests that working memory capacity (WMC) is involved in a wide range of tasks that draw on higher level cognitive processes. In three experiments, we examined the role of WMC in MCPL by introducing measures of working memory capacity, as well as other task manipulations. While individual differences in WMC positively predicted performance in some kinds of multiple-cue tasks, performance on other tasks was entirely unrelated to these differences. Performance on tasks that contained negative cues was correlated with working memory capacity, as well as measures of explicit knowledge obtained in the learning process. When the relevant cues predicted positively, however, WMC became irrelevant. The results are discussed in terms of controlled and automatic processes in learning and judgement. © 2011 The Experimental Psychology Society.
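A multiple-cue probability learning task of the kind this abstract studies can be simulated with a simple delta-rule learner over binary cues; the environment's cue weights (one predicting positively, one negatively) are hypothetical, and no claim is made that this models the experiments themselves.

```python
import random

def run_mcpl(weights, trials=3000, lr=0.05, seed=1):
    """Toy multiple-cue probability learning (MCPL) task: on each
    trial the learner sees binary cues, predicts the criterion from a
    linear cue combination, and adjusts its cue weights from outcome
    feedback (delta rule)."""
    rng = random.Random(seed)
    learned = [0.0] * len(weights)
    for _ in range(trials):
        cues = [rng.randint(0, 1) for _ in weights]
        criterion = sum(w * c for w, c in zip(weights, cues))
        prediction = sum(w * c for w, c in zip(learned, cues))
        error = criterion - prediction
        for i, c in enumerate(cues):
            learned[i] += lr * error * c
    return learned

# One positive and one negative cue, echoing the contrast above.
learned = run_mcpl([1.0, -0.5])
```

The simulation learns both cue types equally well; the abstract's point is that human learners do not, with negative cues drawing on working memory capacity in a way positive cues apparently do not.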