852 results for technology-enhanced assessment


Relevance:

40.00%

Publisher:

Abstract:

OBJECTIVES We sought to determine whether the transmural extent of scar (TES) explains discordances between dobutamine echocardiography (DbE) and thallium single-photon emission computed tomography (Tl-SPECT) in the detection of viable myocardium (VM).

BACKGROUND Discrepancies between DbE and Tl-SPECT are often attributed to differences between contractile reserve and membrane integrity, but may also reflect a disproportionate influence of nontransmural scar on thickening at DbE.

METHODS Sixty patients (age 62 +/- 12 years; 10 women and 50 men) with postinfarction left ventricular dysfunction underwent standard rest-late redistribution Tl-SPECT and DbE. Viable myocardium was identified when dysfunctional segments showed Tl activity >60% on the late-redistribution image or by low-dose augmentation at DbE. Contrast-enhanced magnetic resonance imaging (ceMRI) was used to divide TES into five groups, ranging from no scar (0%) to >75% of the wall thickness replaced by scar.

RESULTS As TES increased, both the mean Tl uptake and the change in wall motion score decreased significantly (both p < 0.001). However, the presence of subendocardial scar was insufficient to prevent thickening; >50% of segments still showed contractile function with a TES of 25% to 75%, although residual function was uncommon with a TES >75%. The relationship of both tests to increasing TES was similar, but Tl-SPECT identified VM more frequently than DbE in all groups. Among segments without scar or with only small amounts of scar, >50% were viable by SPECT.

CONCLUSIONS Both contractile reserve and perfusion are sensitive to the extent of scar. However, contractile reserve may be impaired in the face of no or minor scar, and thickening may still occur with extensive scar. (C) 2004 by the American College of Cardiology Foundation.

Proteomics, the analysis of expressed proteins, has been an important developing area of research for the past two decades [Anderson, NG, Anderson, NL. Twenty years of two-dimensional electrophoresis: past, present and future. Electrophoresis 1996;17:443-53]. Advances in technology have led to a rapid increase in applications to a wide range of samples; from initial experiments using cell lines, more complex tissues and biological fluids are now being assessed to establish changes in protein expression. A primary aim of clinical proteomics is the identification of biomarkers for the diagnosis and therapeutic intervention of disease, by comparing the proteomic profiles of control and disease states, and of differing physiological states. This expansion into clinical samples has not been without difficulties, owing to the complexity and dynamic range of plasma and human tissues, including tissue biopsies. The most widely used techniques for the analysis of clinical samples are surface-enhanced laser desorption/ionisation mass spectrometry (SELDI-MS) and two-dimensional gel electrophoresis (2-DE) coupled to matrix-assisted laser desorption/ionisation mass spectrometry (MALDI-MS) [Person, MD, Monks, TJ, Lau, SS. An integrated approach to identifying chemically induced posttranslational modifications using comparative MALDI-MS and targeted HPLC-ESI-MS/MS. Chem. Res. Toxicol. 2003;16:598-608]. This review aims to summarise the findings of studies that have used proteomic research methods to analyse samples from clinical studies, and to assess the impact that proteomic techniques have had in assessing clinical samples. © 2004 The Canadian Society of Clinical Chemists. All rights reserved.

The thesis presents a two-dimensional Risk Assessment Method (RAM) in which the assessment of risk to groundwater resources incorporates both the quantification of the probability of occurrence of contaminant source terms and the assessment of the resultant impacts. The approach emphasizes a greater dependency on the potential pollution sources, rather than the traditional approach in which assessment is based mainly on intrinsic geo-hydrologic parameters. The risk is calculated using Monte Carlo simulation, whereby random pollution events are generated to match the distribution of historically occurring events or an a priori probability distribution. Integrated mathematical models then simulate contaminant concentrations at predefined monitoring points within the aquifer. The spatial and temporal distributions of the concentrations are calculated from repeated realisations, and the number of times a user-defined concentration magnitude is exceeded is quantified as a risk. The method was set up by integrating MODFLOW-2000, MT3DMS and a FORTRAN-coded risk model, and automated using a DOS batch processing file. GIS software was employed to produce the input files and to present the results. The functionality of the method, as well as its sensitivity to model grid sizes, contaminant loading rates, length of stress periods, and the historical frequency of occurrence of pollution events, was evaluated using hypothetical scenarios and a case study. Chloride-related pollution sources were compiled and used as indicative potential contaminant sources for the case study. At any active model cell, if a randomly generated number is less than the probability of pollution occurrence, the risk model generates a synthetic contaminant source term as an input to the transport model. The results of the applications of the method are presented in the form of tables, graphs and spatial maps.
Varying the model grid sizes indicates no significant effect on the simulated groundwater head. The simulated frequency of daily occurrence of pollution incidents is also independent of the model dimensions. However, the simulated total contaminant mass generated within the aquifer, and the associated volumetric numerical error, appear to increase with increasing grid size. The migration of the contaminant plume also advances faster with coarse grid sizes than with finer ones. The number of daily contaminant source terms generated, and consequently the total mass of contaminant within the aquifer, increases nonlinearly with the increasing frequency of occurrence of pollution events. The risk of pollution from a number of sources all occurring by chance together was evaluated and presented quantitatively as risk maps. This capability to combine the risk to a groundwater feature from numerous potential sources of pollution proved to be a great asset of the method, and a large benefit over contemporary risk and vulnerability methods.
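The stochastic source-term and exceedance-counting logic described above can be sketched in a few lines. This is only an illustrative Monte Carlo skeleton: the cell count, occurrence probability, source mass, threshold and the one-line "transport" step are invented stand-ins (the thesis couples the risk model to MODFLOW-2000/MT3DMS for the actual transport simulation).

```python
import numpy as np

rng = np.random.default_rng(42)

N_REALISATIONS = 1000   # repeated Monte Carlo realisations
N_CELLS = 50            # active model cells (illustrative)
P_OCCURRENCE = 0.02     # probability of a pollution event per cell per realisation
SOURCE_MASS = 100.0     # mass loading per synthetic source term (kg)
ATTENUATION = 0.001     # stand-in for the MODFLOW/MT3DMS transport step (mg/L per kg)
THRESHOLD = 0.15        # user-defined concentration magnitude (mg/L)

exceedances = 0
for _ in range(N_REALISATIONS):
    # If a randomly generated number is below the occurrence probability,
    # that cell contributes a synthetic contaminant source term.
    events = rng.random(N_CELLS) < P_OCCURRENCE
    total_mass = events.sum() * SOURCE_MASS
    # Placeholder for the transport simulation: concentration at the
    # monitoring point taken as proportional to the total mass generated.
    concentration = total_mass * ATTENUATION
    if concentration > THRESHOLD:
        exceedances += 1

# Risk = fraction of realisations in which the threshold was exceeded.
risk = exceedances / N_REALISATIONS
print(f"Simulated risk of exceeding {THRESHOLD} mg/L: {risk:.3f}")
```

Repeating the realisation loop over every monitoring point, and mapping the resulting exceedance fractions in GIS, gives the kind of risk maps the thesis describes.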

The impact of information and communication technology use on learning outcomes for accounting students is not well understood. This study investigates the impact of design features of Blackboard used as a Web-based Learning Environment (WBLE) in teaching undergraduate accounting students. Specifically, this investigation reports on a number of Blackboard design features (e.g. delivery of lecture notes, announcements, online assessment and model answers) used to deliver learning materials regarded as necessary to enhance learning outcomes. Responses from 369 on-campus students provided data to develop a regression model that seeks to explain enhanced participation and mental effort. The final regression shows that student satisfaction with the use of a WBLE is associated with five design features or variables: the usefulness and the availability of lecture notes, online assessment, model answers, and online chat.
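A satisfaction regression of this general shape can be illustrated as follows. The feature names, Likert-style ratings and coefficients below are all synthetic placeholders, not the study's survey items or results; only the sample size (369 responses) comes from the abstract.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 369  # number of on-campus student responses in the study

# Hypothetical 1-5 ratings for five WBLE design features.
features = ["lecture_notes_useful", "lecture_notes_available",
            "online_assessment", "model_answers", "online_chat"]
X = rng.integers(1, 6, size=(n, len(features))).astype(float)

# Synthetic satisfaction score: weighted sum of the features plus noise
# (the weights are invented for the illustration).
true_beta = np.array([0.40, 0.30, 0.25, 0.20, 0.15])
y = 1.0 + X @ true_beta + rng.normal(0.0, 0.5, size=n)

# Ordinary least squares with an intercept column.
A = np.column_stack([np.ones(n), X])
beta, *_ = np.linalg.lstsq(A, y, rcond=None)

for name, b in zip(["intercept"] + features, beta):
    print(f"{name:24s} {b:+.3f}")
```

With enough responses, the estimated coefficients recover the assumed weights closely, which is the sense in which such a regression "explains" satisfaction in terms of the design features.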

New Technology Based Firms (NTBFs) are considered important for the economic development of a country with regard to both employment growth and innovative activity. The latter is believed to contribute significantly to increases in productivity, and therefore to the competitiveness of the UK economy. This study contributes to the above literature by investigating two of the factors believed to limit the growth of such firms in the UK: the existence of a ‘knowledge gap’ and the existence of a ‘financial gap’. These themes are developed along three main research lines. First, based upon the human capital theory initially proposed by Becker (1964), new evidence is provided on the human capital characteristics (experience and education) of current UK NTBF entrepreneurs. Second, the causal relationship between general and specific human capital (as well as their interactions) and company performance and growth is investigated via its traditional direct effect as well as via its indirect effect on access to external finance. Finally, more light is shed on the financial structure and the type of financial constraints that high-tech firms face at start-up. In particular, whether a financial gap exists is explored by distinguishing between the demand for and the supply of external finance, as well as by type of external source of financing. The empirical testing of the various research hypotheses was carried out through an original survey of new technology based firms, defined as independent companies established in the past 25 years in R&D-intensive sectors. The resulting dataset contains information for 412 companies on a number of general company characteristics and the characteristics of their entrepreneurs in 2004. Policy and practical implications are provided for future and current entrepreneurs and for providers of external finance.

Power generation from biomass is a sustainable energy technology which can contribute to substantial reductions in greenhouse gas emissions, but with greater potential for environmental, economic and social impacts than most other renewable energy technologies. It is therefore important, in assessing bioenergy systems, to take account of not only technical but also environmental, economic and social parameters on a common basis. This work addresses the challenge of analysing, quantifying and comparing these factors for bioenergy power generation systems. A life-cycle approach is used to analyse the technical, environmental, economic and social impacts of entire bioelectricity systems, with a number of life-cycle indicators as outputs to facilitate cross-comparison. The results show that similar greenhouse gas savings are achieved with the wide variety of technologies and scales studied, but the land-use efficiency of greenhouse gas savings and the specific airborne emissions varied substantially. Also, while specific investment costs and electricity costs vary substantially from one system to another, the number of jobs created per unit of electricity delivered remains roughly constant. Recorded views of stakeholders illustrate that diverging priorities exist for different stakeholder groups, and this will influence the appropriate choice of bioenergy systems for different applications.

Renewable energy project development is highly complex and success is by no means guaranteed. Decisions are often made with approximate or uncertain information, yet the methods currently employed by decision-makers do not necessarily accommodate this. Levelised energy costs (LEC) are one commonly applied measure used within the energy industry to assess the viability of potential projects and to inform policy. The research proposes a method for accommodating such uncertainty by enhancing the traditional discounting LEC measure with fuzzy set theory. Furthermore, the research develops the fuzzy LEC (F-LEC) methodology to incorporate the cost of financing a project from debt and equity sources. Applied to an example bioenergy project, the research demonstrates the benefit of incorporating fuzziness for project viability, optimal capital structure and key-variable sensitivity analysis decision-making. The proposed method contributes by incorporating uncertain and approximate information into the widely used LEC measure and by being applicable to a wide range of energy project viability decisions. © 2013 Elsevier Ltd. All rights reserved.
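The basic idea of fuzzifying a discounted LEC can be sketched by representing the uncertain inputs as triangular fuzzy numbers (low, mode, high) and propagating them through the standard LEC ratio. This is a minimal sketch, not the paper's F-LEC method: the cost, energy and discount-rate figures are invented, and debt/equity financing costs are omitted.

```python
# LEC = (capital cost + discounted operating costs) / discounted energy output.

def discounted_sum(annual_value, rate, years):
    """Sum of annual_value / (1 + rate)**t for t = 1..years."""
    return sum(annual_value / (1 + rate) ** t for t in range(1, years + 1))

def fuzzy_lec(capex, annual_cost, annual_energy, rate, years):
    """Evaluate the LEC at the pessimistic, modal and optimistic corners
    of triangular fuzzy inputs (low, mode, high)."""
    lo_c, m_c, hi_c = annual_cost      # operating cost per year (currency)
    lo_e, m_e, hi_e = annual_energy    # delivered energy per year (MWh)
    # The highest LEC pairs high costs with low energy, and vice versa.
    hi = (capex + discounted_sum(hi_c, rate, years)) / discounted_sum(lo_e, rate, years)
    mode = (capex + discounted_sum(m_c, rate, years)) / discounted_sum(m_e, rate, years)
    lo = (capex + discounted_sum(lo_c, rate, years)) / discounted_sum(hi_e, rate, years)
    return lo, mode, hi

# Illustrative bioenergy-plant figures (assumptions, not the paper's data).
lec = fuzzy_lec(capex=5_000_000,
                annual_cost=(180_000, 220_000, 280_000),
                annual_energy=(7_000, 8_000, 8_500),
                rate=0.08, years=20)
print("LEC (low, mode, high) in currency/MWh:", lec)
```

A crisp LEC would return only the middle value; carrying the (low, mode, high) triple through the calculation is what lets viability and sensitivity decisions account for the spread of plausible outcomes.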

The principal theme of this thesis is the identification of additional factors affecting soft contact lens fit, and consequently the better prediction of that fit. Various models have been put forward in an attempt to predict the parameters that influence soft contact lens fit dynamics; however, the factors that drive variation in soft lens fit are still not fully understood. The investigations in this body of work involved the use of a variety of different imaging techniques both to quantify the anterior ocular topography and to assess lens fit. The use of Anterior-Segment Optical Coherence Tomography (AS-OCT) allowed for a more complete characterisation of the cornea and corneoscleral profile (CSP) than either conventional keratometry or videokeratoscopy alone, and for the collection of normative data relating to the CSP for a substantial sample size. The scleral face was identified as being rotationally asymmetric, with the mean corneoscleral junction (CSJ) angle sharpest nasally and becoming progressively flatter at the temporal, inferior and superior limbal junctions. Additionally, 77% of all CSJ angles were within ±5° of 180°, demonstrating an almost tangential extension of the cornea to form the paralimbal sclera. Use of AS-OCT allowed for a more robust determination of corneal diameter than white-to-white (WTW) measurement, which is highly variable and dependent on changes in peripheral corneal transparency. Significant differences in ocular topography were found between ethnicities and sexes, most notably for corneal diameter and corneal sagittal height. Lens tightness was found to be significantly correlated with the difference between horizontal CSJ angles (r = +0.40, p = 0.0086).
Modelling of the CSP data allowed for the prediction of up to 24% of the variance in contact lens fit; however, stronger associations, and an increase in the modelled prediction of variance in fit, would likely have been found had an objective method of lens fit assessment been available. A subsequent investigation to determine the validity and repeatability of objective contact lens fit assessment using digital video capture showed no significant benefit over subjective evaluation. The technique was nevertheless employed in the ensuing investigation to show significant changes in lens fit between 8 hours (the longest duration of wear previously examined) and 16 hours, demonstrating that wearing time is an additional factor driving lens fit dynamics. Modelling of data from enhanced videokeratoscopy composite maps alone allowed for up to 77% of the variance in soft contact lens fit to be predicted, and up to almost 90% when used in conjunction with OCT. The investigations provided further insight into ocular topography and the factors affecting soft contact lens fit.