857 results for Higher level education
Abstract:
The Leaving Certificate (LC) is the national, standardised state examination in Ireland required for entry to third-level education – this presents a massive, raw corpus of data with the potential to yield invaluable insight into the phenomena of learner interlanguage. With samples of official LC Spanish examination data, this project has compiled a digitised corpus of learner Spanish comprising the written and oral production of 100 candidates. This corpus was then analysed using a specific investigative corpus technique, Computer-aided Error Analysis (CEA; Dagneaux et al., 1998). CEA is a powerful approach in that it greatly facilitates the quantification and analysis of a large learner corpus in digital format. The corpus was both compiled and analysed using the UAM Corpus Tool (O’Donnell, 2013). This tool allows for the recording of candidate-specific variables such as grade, examination level, task type and gender, thereby allowing critical analysis of the corpus as a single unit, as separate written and oral sub-corpora, and of performance per task, level and gender. This is an interdisciplinary work combining aspects of Applied Linguistics, Learner Corpus Research and Foreign Language (FL) Learning. Beginning with a review of the context of FL learning in Ireland and Europe, I go on to discuss the disciplinary context and theoretical framework for this work and outline the methodology applied. I then perform detailed quantitative and qualitative analyses before combining all research findings and outlining the principal conclusions. This investigation does not make a priori assumptions about the data set, the LC Spanish examination, the context of FLs or any aspect of learner competence. It undertakes to provide the linguistic research community and the domain of Spanish language learning and pedagogy in Ireland with an empirical, descriptive profile of real learner performance, characterising learner difficulty.
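As a rough illustration of how candidate-specific variables can drive sub-corpus comparisons of error frequencies, the sketch below tallies annotated errors per grouping variable. The record fields, error tags and sample values are hypothetical placeholders, not the project's actual annotation scheme or the UAM Corpus Tool's export format.

```python
# Illustrative sketch (hypothetical fields): tallying annotated errors per
# candidate-specific variable after exporting annotations from a corpus tool.
from collections import Counter
from dataclasses import dataclass

@dataclass
class ErrorRecord:
    candidate_id: str
    mode: str        # "written" or "oral"
    level: str       # examination level, e.g. "Higher" or "Ordinary"
    grade: str
    gender: str
    task: str
    error_tag: str   # e.g. "gender_agreement", "verb_tense"

def error_counts(records, group_by):
    """Count error tags within each value of a grouping variable."""
    counts = {}
    for rec in records:
        key = getattr(rec, group_by)
        counts.setdefault(key, Counter())[rec.error_tag] += 1
    return counts

# Example: compare the written and oral sub-corpora.
sample = [
    ErrorRecord("c001", "written", "Higher", "B1", "F", "opinion_piece", "gender_agreement"),
    ErrorRecord("c001", "oral", "Higher", "B1", "F", "role_play", "verb_tense"),
]
print(error_counts(sample, "mode"))
```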
Abstract:
Corporate bonds appeared as early as 1992-1994 in Vietnamese capital markets. However, they remain unfamiliar to both the business sector and academic circles. This paper explores different dimensions of the Vietnamese corporate bond market using a unique and perhaps most complete dataset. The state not only intervenes in the bond market with its powerful budget and policies but also competes directly with enterprises. The dominance of SOEs and large corporations also keeps SMEs away from this debt-financing vehicle. Whenever a conversion feature is available, bondholders are more willing to accept a lower fixed-income payoff, but they are unlikely to stick with it. On the one hand, prospective bondholders may value the prospective equity holding favorably ex ante. On the other hand, the applicable coupon rate on such a bond can turn into a negative inflation-adjusted payoff when tight monetary policy is exercised and the corresponding equity holding turns out to be worthless ex post. Given the weak primary market and virtually nonexistent secondary market, the corporate bond market in Vietnam reflects our perception of relationship-based and rent-seeking behavior in the financial markets. For corporate bonds to really work, the market critically needs a higher level of liquidity so that they become truly tradable financial assets.
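To make the negative inflation-adjusted payoff concrete, a worked illustration using the Fisher relation follows; the 8% coupon and 12% inflation figures are hypothetical and not drawn from the paper's dataset.

```latex
\[
  r_{\text{real}} \;=\; \frac{1 + r_{\text{nominal}}}{1 + \pi} - 1,
  \qquad
  \frac{1 + 0.08}{1 + 0.12} - 1 \;\approx\; -3.6\%,
\]
```

so a fixed coupon set below the inflation rate under tight monetary policy leaves the bondholder with a negative real return, even before the conversion option expires worthless.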
Abstract:
Oxidative skeletal muscles are more resistant than glycolytic muscles to cachexia caused by chronic heart failure and other chronic diseases. The molecular mechanism for the protection associated with the oxidative phenotype remains elusive. We hypothesized that differences in reactive oxygen species (ROS) and nitric oxide (NO) determine the fiber type susceptibility. Here, we show that intraperitoneal injection of endotoxin (lipopolysaccharide, LPS) in mice resulted in higher levels of ROS and greater expression of the muscle-specific E3 ubiquitin ligases, muscle atrophy F-box (MAFbx)/atrogin-1 and muscle RING finger-1 (MuRF1), in glycolytic white vastus lateralis muscle than in oxidative soleus muscle. By contrast, NO production, inducible NO synthase (iNos) and antioxidant gene expression were greatly enhanced in oxidative, but not in glycolytic, muscles, suggesting that NO mediates protection against muscle wasting. NO donors enhanced iNos and antioxidant gene expression and blocked cytokine/endotoxin-induced MAFbx/atrogin-1 expression in cultured myoblasts and in skeletal muscle in vivo. Our studies reveal a novel protective mechanism in oxidative myofibers mediated by enhanced iNos and antioxidant gene expression and suggest a significant value of enhanced NO signaling as a new therapeutic strategy for cachexia.
Abstract:
The main impetus for a mini-symposium on corticothalamic interrelationships was the recent wave of studies highlighting the role of the thalamus in aspects of cognition beyond sensory processing. The thalamus contributes to a range of basic cognitive behaviors that include learning and memory, inhibitory control, decision-making, and the control of visual orienting responses. Its functions are deeply intertwined with those of the better studied cortex, although the principles governing its coordination with the cortex remain opaque, particularly in higher-level aspects of cognition. How should the thalamus be viewed in the context of the rest of the brain? Although its role extends well beyond relaying of sensory information from the periphery, the main function of many of its subdivisions does appear to be that of a relay station, transmitting neural signals primarily to the cerebral cortex from a number of brain areas. In cognition, its main contribution may thus be to coordinate signals between diverse regions of the telencephalon, including the neocortex, hippocampus, amygdala, and striatum. This central coordination is further subject to considerable extrinsic control, for example, inhibition from the basal ganglia, zona incerta, and pretectal regions, and chemical modulation from ascending neurotransmitter systems. What follows is a brief review of the role of the thalamus in aspects of cognition and behavior, focusing on a summary of the topics covered in a mini-symposium held at the Society for Neuroscience meeting, 2014.
Abstract:
BACKGROUND: In patients with myelomeningocele (MMC), a high number of fractures occur in the paralyzed extremities, affecting mobility and independence. The aims of this retrospective cross-sectional study were to determine the frequency of fractures in our patient cohort and to identify trends and risk factors relevant to such fractures. MATERIALS AND METHODS: Between March 1988 and June 2005, 862 patients with MMC were treated at our hospital. The medical records, surgery reports, and X-rays of these patients were evaluated. RESULTS: During the study period, 11% of the patients (n = 92) suffered one or more fractures. Risk analysis showed that patients with MMC and thoracic-level paralysis had a sixfold higher risk of fracture compared with those with sacral-level paralysis. Femoral-neck z-scores measured by dual-energy X-ray absorptiometry (DEXA) differed significantly according to the level of neurological impairment, with lower z-scores in children with a higher level of lesion. Furthermore, the rate of epiphyseal separation increased noticeably after cast immobilization, mainly affecting patients who could walk relatively well. CONCLUSIONS: Patients with thoracic-level paralysis represent a group with high fracture risk. According to these results, fractures and epiphyseal injuries in patients with MMC should be treated by plaster immobilization. The duration of immobilization should be kept to a minimum (<4 weeks) because of the increased risk of secondary fractures. Alternatively, patients with refractures can be treated surgically when nonoperative treatment has failed.
Abstract:
PURPOSE: Detoxification often serves as an initial contact for treatment and represents an opportunity for engaging patients in aftercare to prevent relapse. However, there is limited information concerning clinical profiles of individuals seeking detoxification, and the opportunity to engage patients in detoxification for aftercare often is missed. This study examined clinical profiles of a geographically diverse sample of opioid-dependent adults in detoxification to discern the treatment needs of a growing number of women and whites with opioid addiction and to inform interventions aimed at improving use of aftercare or rehabilitation. METHODS: The sample included 343 opioid-dependent patients enrolled in two national multi-site studies of the National Drug Abuse Treatment Clinical Trials Network (CTN001-002). Patients were recruited from 12 addiction treatment programs across the nation. Gender and racial/ethnic differences in addiction severity, human immunodeficiency virus (HIV) risk, and quality of life were examined. RESULTS: Women and whites were more likely than men and African Americans to have greater psychiatric and family/social relationship problems and report poorer health-related quality of life and functioning. Whites and Hispanics exhibited higher levels of total HIV risk scores and risky injection drug use scores than African Americans, and Hispanics showed a higher level of unprotected sexual behaviors than whites. African Americans were more likely than whites to use heroin and cocaine and to have more severe alcohol and employment problems. CONCLUSIONS: Women and whites show more psychopathology than men and African Americans. These results highlight the need to monitor an increased trend of opioid addiction among women and whites and to develop effective combined psychosocial and pharmacologic treatments to meet the diverse needs of the expanding opioid-abusing population. Elevated levels of HIV risk behaviors among Hispanics and whites also warrant more research to delineate mechanisms and to reduce their risky behaviors.
Abstract:
Existing election algorithms suffer from limited scalability. This limit stems from their communication design, which in turn stems from their fundamentally two-state behaviour. This paper presents a new election algorithm specifically designed to be highly scalable in broadcast networks whilst allowing any processing node to become coordinator with initially equal probability. To achieve this, careful attention has been paid to the communication design, and an additional state has been introduced. The design of the tri-state election algorithm has been motivated by the requirements analysis of a major research project to deliver robust, scalable distributed applications, including load sharing, in hostile computing environments in which it is common for processing nodes to be rebooted frequently without notice. The new election algorithm is based in part on a simple 'emergent' design. The science of emergence is of great relevance to developers of distributed applications because it describes how higher-level self-regulatory behaviour can arise from many participants following a small set of simple rules. The tri-state election algorithm is shown to have very low communication complexity, in which the number of messages generated remains loosely bounded regardless of scale for large systems; to be highly scalable, because nodes in the idle state do not transmit any messages; and, because of its self-organising characteristics, to be very stable.
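The sketch below shows one way a three-state, broadcast-based election of this kind could be structured, with silent idle nodes and randomized timeouts giving every node an initially equal chance to contend. The state names, message types and tie-breaking rule are assumptions for illustration, not the paper's actual specification.

```python
# Minimal sketch of a three-state, broadcast-based election (illustrative only).
import random
from enum import Enum, auto

class State(Enum):
    IDLE = auto()         # follows the current coordinator; sends nothing
    CANDIDATE = auto()    # contends for coordination after a timeout
    COORDINATOR = auto()  # periodically broadcasts a heartbeat

class Node:
    def __init__(self, node_id):
        self.node_id = node_id
        self.state = State.IDLE
        # Random timeout gives each node an initially equal chance to contend.
        self.timeout = random.uniform(1.0, 2.0)

    def on_silence(self, elapsed):
        """Called when no coordinator heartbeat has been heard for `elapsed` seconds."""
        if self.state is State.IDLE and elapsed > self.timeout:
            self.state = State.CANDIDATE
            return ("CANDIDATE_ANNOUNCE", self.node_id)  # single broadcast
        return None

    def on_broadcast(self, msg, sender_id):
        """React to a message heard on the broadcast network."""
        if msg == "COORDINATOR_ANNOUNCE" and sender_id != self.node_id:
            self.state = State.IDLE   # defer; idle nodes stay silent
        elif msg == "CANDIDATE_ANNOUNCE" and self.state is State.CANDIDATE:
            if sender_id < self.node_id:
                self.state = State.IDLE  # simple deterministic tie-break

    def on_candidate_timeout(self):
        """Uncontested candidates promote themselves and announce once."""
        if self.state is State.CANDIDATE:
            self.state = State.COORDINATOR
            return ("COORDINATOR_ANNOUNCE", self.node_id)
        return None
```

Because idle nodes never transmit and candidates broadcast only once, the message count stays loosely bounded as the number of nodes grows, which is the property the abstract emphasises.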
Abstract:
Natural distributed systems are adaptive, scalable and fault-tolerant. Emergence science describes how higher-level self-regulatory behaviour arises in natural systems from many participants following simple rulesets. Emergence advocates simple communication models, autonomy and independence, enhancing robustness and self-stabilization. High-quality distributed applications such as autonomic systems must satisfy the appropriate non-functional requirements, which include scalability, efficiency, robustness, low latency and stability. However, the traditional design of distributed applications, especially in terms of the communication strategies employed, can introduce compromises between these characteristics. This paper discusses ways in which emergence science can be applied to distributed computing, avoiding some of the compromises associated with traditionally designed applications. To demonstrate the effectiveness of this paradigm, an emergent election algorithm is described and its performance evaluated. The design incorporates nondeterministic behaviour. The resulting algorithm has very low communication complexity, and is simultaneously very stable, scalable and robust.
Abstract:
This paper presents the AGILE policy expression language. The language enables powerful expression of self-managing behaviours and facilitates policy-based autonomic computing in which the policies themselves can be adapted dynamically and automatically. The language is generic so as to be deployable across a wide spectrum of application domains, and is very flexible through the use of simple yet expressive syntax and semantics. The development of AGILE is motivated by the need for adaptive policy mechanisms that are easy to deploy into legacy code and can be used by practitioners who are not autonomics experts to embed self-managing behaviours with low cost and risk. A library implementation of the policy language is described. The implementation extends the state of the art in policy-based autonomics through innovations that include support for multiple policy versions of a given policy type, multiple configuration templates, and higher-level ‘meta-policies’ to dynamically select between differently configured business-logic policy instances and templates. Two dissimilar example deployment scenarios are examined.
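The sketch below illustrates the idea of a meta-policy selecting at run time between differently configured versions of the same business-logic policy type. It is not AGILE's actual syntax or library API; the policy names, fields and selection rule are hypothetical stand-ins for the concepts described in the abstract.

```python
# Illustrative sketch only: hypothetical stand-ins for policy versions,
# configuration templates, and a meta-policy that selects between them.

# Two versions of the same business-logic policy type, built from templates.
policy_versions = {
    "load_balancer.v1": {"template": "conservative", "threshold": 0.85},
    "load_balancer.v2": {"template": "aggressive",   "threshold": 0.60},
}

def meta_policy(context):
    """Higher-level policy: dynamically select which configured version applies."""
    if context.get("workload_volatility", 0.0) > 0.5:
        return "load_balancer.v2"   # react earlier under volatile load
    return "load_balancer.v1"

def evaluate(context):
    """Apply the selected policy version to the current context."""
    selected = policy_versions[meta_policy(context)]
    return context["utilisation"] > selected["threshold"]

# Example: under volatile load the meta-policy picks the aggressive version.
print(evaluate({"utilisation": 0.7, "workload_volatility": 0.8}))  # True
```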
Abstract:
There is growing concern within the profession of pharmacy regarding the numerical competency of students completing their undergraduate studies. In this 7-year study, the numerical competency of first-year pharmacy undergraduate students at the School of Pharmacy, Queen's University Belfast, was assessed both on entry to the MPharm degree and after completion of a basic numeracy course during the first semester of Level 1. The results suggest that students are not retaining fundamental numeracy concepts initially taught at secondary level, and that the level of ability has significantly decreased over the past 7 years.
Keywords: Numeracy; calculations; MPharm; assessment
Abstract:
The work presented is concerned with the estimation of manufacturing cost at the concept design stage, when little technical information is readily available. The work focuses on the nose cowl sections of a wide range of engine nacelles built at Bombardier Aerospace Shorts of Belfast. A core methodology is presented that: defines the manufacturing cost elements that are prominent; utilises technical parameters that are highly influential in generating those costs; establishes the linkage between these two; and builds the associated cost estimating relations into models. The methodology is readily adapted to deal with both the early and more mature conceptual design phases, which thereby highlights the generic, flexible and fundamental nature of the method. The early concept cost model simplifies cost as a cumulative element that can be estimated using higher-level complexity ratings, while the mature concept cost model breaks manufacturing cost down into a number of constituents that are each driven by their own specific drivers. Both methodologies have an average error of less than ten percent when correlated with actual findings, thus achieving an acceptable level of accuracy. By way of validity and application, the research is firmly based on industrial case studies and practice and addresses the integration of design and manufacture through cost. The main contribution of the paper is the cost modelling methodology. The elemental modelling of the cost breakdown structure through materials, part fabrication, assembly and their associated drivers is relevant to the analytical design procedure, as it utilises design definition and complexity that is understood by engineers.
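The sketch below contrasts the two levels of model described above: an early concept estimate driven by cumulative complexity ratings, and a mature concept estimate built from cost constituents each with its own driver. The element names, coefficients and rates are hypothetical placeholders, not the calibrated cost estimating relations from the industrial case studies.

```python
# Hedged sketch of the two model levels; all numbers are illustrative only.

def early_concept_cost(complexity_ratings, cost_per_point=1_000.0):
    """Early concept: cost as a cumulative element estimated from complexity ratings."""
    return cost_per_point * sum(complexity_ratings.values())

def mature_concept_cost(constituents):
    """Mature concept: each cost constituent driven by its own specific driver."""
    return sum(rate * driver for rate, driver in constituents.values())

# Example nose-cowl-style inputs (hypothetical values).
print(early_concept_cost({"structure": 7, "systems": 4, "assembly": 5}))
print(mature_concept_cost({
    "material":    (55.0, 120.0),   # (cost rate per kg, mass in kg)
    "fabrication": (80.0, 65.0),    # (hourly rate, fabrication hours)
    "assembly":    (75.0, 40.0),    # (hourly rate, assembly hours)
}))
```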
Abstract:
The primary intention of this paper is to review the current state of the art in engineering cost modelling as applied to aerospace. This is a topic of current interest and, in addressing the literature, the presented work also sets out some of the recognised definitions of cost that relate to the engineering domain. The paper does not attempt to address the higher-level financial sector but rather focuses on the costing issues directly relevant to the engineering process, primarily those of design and manufacture. This is of more contemporary interest as there is now a shift towards the analysis of the influence of cost, as defined in more engineering-related terms, in an attempt to link into integrated product and process development (IPPD) within a concurrent engineering environment. Consequently, the cost definitions are reviewed in the context of the nature of cost as applicable to the engineering process stages: from bidding through to design, to manufacture, to procurement and, ultimately, to operation. The linkage and integration of design and manufacture is addressed in some detail. This leads naturally to the concept of engineers influencing and controlling cost within their own domain rather than entrusting this to financiers who have little control over the causes of cost. In terms of influence, the engineer creates the potential for cost, and in a concurrent environment this requires models that integrate cost into the decision-making process.
Abstract:
The paper is primarily concerned with the modelling of aircraft manufacturing cost. The aim is to establish an integrated life cycle balanced design process through a systems engineering approach to interdisciplinary analysis and control. The cost modelling is achieved using the genetic causal approach, which enforces product family categorisation and the subsequent generation of causal relationships between deterministic cost components and their design source. This utilises causal parametric cost drivers and the definition of the physical architecture from the Work Breakdown Structure (WBS) to identify product families. The paper presents applications to the overall aircraft design with a particular focus on the fuselage as a subsystem of the aircraft, including fuselage panels and localised detail, as well as engine nacelles. The higher-level application to aircraft requirements and functional analysis is investigated and verified relative to life cycle design issues for the relationship between acquisition cost and Direct Operational Cost (DOC), for a range of both metal and composite subsystems. Maintenance is considered in some detail as an important contributor to DOC and life cycle cost. The lower-level application to aircraft physical architecture is investigated and verified for the WBS of an engine nacelle, including a sequential build stage investigation of the materials, fabrication and assembly costs. The studies are then extended by investigating the acquisition cost of aircraft fuselages, including the recurring unit cost and the non-recurring design cost of the airframe subsystem. The systems costing methodology is facilitated by the genetic causal cost modelling technique, as the latter is highly generic, interdisciplinary, flexible, multilevel and recursive in nature, and can be applied at the various analysis levels required of systems engineering. Therefore, the main contribution of the paper is a methodology for applying systems engineering costing, supported by the genetic causal cost modelling approach, whether at a requirements, functional or physical level.
Abstract:
BACKGROUND: Although severe encephalopathy has been proposed as a possible contraindication to the use of noninvasive positive-pressure ventilation (NPPV), a growing number of clinical reports have shown it to be effective in patients with impaired consciousness, and even coma, secondary to acute respiratory failure, especially hypercapnic acute respiratory failure (HARF). To further evaluate the effectiveness and safety of NPPV for severe hypercapnic encephalopathy, a prospective case-control study was conducted at a university respiratory intensive care unit (RICU) in patients with acute exacerbation of chronic obstructive pulmonary disease (AECOPD) during the past 3 years. METHODS: Forty-three of 68 consecutive AECOPD patients requiring ventilatory support for HARF were divided into 2 groups, carefully matched for age, sex, COPD course, tobacco use and previous hospitalization history, according to the severity of encephalopathy: 22 patients with a Glasgow coma scale (GCS) score <10 served as group A and 21 with GCS ≥10 as group B. RESULTS: Compared with group B, group A had a higher baseline arterial partial CO2 pressure ((102 +/- 27) mmHg vs (74 +/- 17) mmHg, P <0.01), and lower GCS (7.5 +/- 1.9 vs 12.2 +/- 1.8, P <0.01), arterial pH (7.18 +/- 0.06 vs 7.28 +/- 0.07, P <0.01) and partial O2 pressure/fraction of inspired O2 ratio (168 +/- 39 vs 189 +/- 33, P <0.05). The NPPV success rate and hospital mortality were 73% (16/22) and 14% (3/22), respectively, in group A, which were comparable to those in group B (68% (15/21) and 14% (3/21), respectively, all P > 0.05), but group A needed, on average, 7 cmH2O more maximal pressure support during NPPV, and 4, 4 and 7 days longer NPPV time, RICU stay and hospital stay, respectively, than group B (P <0.05 or P <0.01). NPPV therapy failed in 12 patients (6 in each group) because of excessive airway secretions (7 patients), hemodynamic instability (2), worsening of dyspnea and deterioration of gas exchange (2), and gastric content aspiration (1). CONCLUSIONS: Selected patients with severe hypercapnic encephalopathy secondary to HARF can be treated as effectively and safely with NPPV as awake patients with HARF due to AECOPD; a trial of NPPV should be instituted to reduce the need for endotracheal intubation in patients with severe hypercapnic encephalopathy who are otherwise good candidates for NPPV due to AECOPD.
Abstract:
Extending the work presented in Prasad et al. (IEEE Proceedings on Control Theory and Applications, 147, 523-37, 2000), this paper reports a hierarchical nonlinear physical model-based control strategy to account for the problems arising from the complex dynamics of drum level and governor valve, and demonstrates its effectiveness in plant-wide disturbance handling. The strategy incorporates a two-level control structure consisting of lower-level conventional PI regulators and a higher-level nonlinear physical model predictive controller (NPMPC) used mainly for set-point manoeuvring. The lower-level PI loops help stabilise the unstable drum-boiler dynamics and allow faster governor valve action for power and grid-frequency regulation. The higher-level NPMPC provides an optimal load demand (or set-point) transition by effective handling of plant-wide interactions and system disturbances. The strategy has been tested in a simulation of a 200-MW oil-fired power plant at Ballylumford in Northern Ireland. A novel approach is devised to test the disturbance rejection capability in severe operating conditions. Low-frequency disturbances were created by making random changes in radiation heat flow on the boiler side, while condenser vacuum fluctuated in a random fashion on the turbine side. In order to simulate high-frequency disturbances, pulse-type load disturbances were made to strike at instants that are not an integral multiple of the NPMPC sampling period. Impressive results have been obtained during both types of system disturbances and extremely high rates of load change, right across the operating range. These results compared favourably with those from a conventional state-space generalized predictive control (GPC) method designed under similar conditions.
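The sketch below illustrates the shape of such a two-level structure: a lower-level PI regulator stabilises a loop while a higher-level supervisor manoeuvres its set-point at a bounded rate. The rate-limited supervisor is only a placeholder for the paper's NPMPC, and the gains, limits and first-order plant surrogate are hypothetical, not the Ballylumford plant model.

```python
# Hedged sketch of a two-level control structure (illustrative values only).

class PI:
    """Lower-level conventional PI regulator."""
    def __init__(self, kp, ki, dt):
        self.kp, self.ki, self.dt = kp, ki, dt
        self.integral = 0.0

    def update(self, setpoint, measurement):
        error = setpoint - measurement
        self.integral += error * self.dt
        return self.kp * error + self.ki * self.integral

def supervisory_setpoint(current_sp, target_sp, max_rate, dt):
    """Higher level: move the set-point toward the load-demand target at a bounded rate,
    standing in for the NPMPC's optimal set-point transition."""
    step = max(-max_rate * dt, min(max_rate * dt, target_sp - current_sp))
    return current_sp + step

# Example: ramp the load set-point from 120 MW toward 200 MW while the PI loop tracks it.
dt, sp, y = 1.0, 120.0, 120.0
pi = PI(kp=0.8, ki=0.05, dt=dt)
for _ in range(10):
    sp = supervisory_setpoint(sp, target_sp=200.0, max_rate=3.0, dt=dt)
    u = pi.update(sp, y)
    y += 0.5 * u * dt   # crude first-order plant surrogate
```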