894 results for performance comparison


Relevance: 30.00%

Abstract:

The lexical-semantic and syntactic abilities of a group of individuals with chronic nonthalamic subcortical (NS) lesions following stroke (n = 6) were investigated using the Western Aphasia Battery (WAB) picture description task [Kertesz, A. (1982). The Western aphasia battery. New York: Grune and Stratton] and compared with those of a group of subjects with Huntington's Disease (HD) (n = 6) and a nonneurologically impaired control group (n = 6) matched for age, sex, and educational level. The performance of the NS and HD subjects did not differ significantly from the healthy controls on measures of lexical-semantic abilities. NS and HD subjects provided as much information about the target picture as control subjects, but produced fewer action information units. Analysis of syntactic abilities revealed that the HD subjects produced significantly more grammatical errors than both the NS and control subjects and that the NS group performed in a similar manner to control subjects. These findings are considered in terms of current theories of subcortical language function. Learning outcomes: As a result of this activity, the reader will obtain information about the debate surrounding the role of subcortical language mechanisms and be provided with new information on the comparative picture description abilities of individuals with known vascular and degenerative subcortical pathologies and healthy control participants. © 2005 Elsevier Inc. All rights reserved.

Relevance: 30.00%

Abstract:

Therapeutic monitoring with dosage individualization of sirolimus drug therapy is standard clinical practice for organ transplant recipients. For several years sirolimus monitoring has been restricted as a result of the lack of an immunoassay. The recent reintroduction of the microparticle enzyme immunoassay (MEIA®) for sirolimus on the IMx® analyser has the potential to address this situation. This study, using patient samples, compared the MEIA® sirolimus method with an established HPLC-tandem mass spectrometry method (HPLC-MS/MS). An established HPLC-UV assay was used for independent cross-validation. For quality control materials (5, 11, 22 µg/L), the MEIA® showed acceptable validation criteria based on intra- and inter-run precision (CV) and accuracy (bias) of < 8% and < 13%, respectively. The lower limit of quantitation was approximately 3 µg/L. The performance of the immunoassay was compared with HPLC-MS/MS using EDTA whole-blood samples obtained from various types of organ transplant recipients (n = 116). The resultant Deming regression line was: MEIA = 1.3 × HPLC-MS/MS + 1.3 (r = 0.967, s(y/x) = 1), with a mean bias of 49.2% ± 23.1% (range, -2.4% to 128%; P < 0.001). The reason for the large and variable bias was not explored in this study, but sirolimus-metabolite cross-reactivity with the MEIA® antibody could be a substantive contributing factor. While the MEIA® sirolimus method may be an adjunct to sirolimus dosage individualization in transplant recipients, users must consider the implications of the substantial and variable bias when interpreting results. In selected patients where difficult clinical issues arise, reference to a specific chromatographic method may be required.
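A note on reproducing the comparison: Deming regression of the kind used above treats both assays as subject to measurement error. Below is a minimal sketch, assuming equal error variances in the two methods (lam = 1); the paired concentrations are invented stand-ins, not the study's data.

```python
import numpy as np

def deming(x, y, lam=1.0):
    """Deming regression of y on x.

    lam is the ratio of the y-error variance to the x-error variance;
    lam = 1.0 assumes both methods are equally noisy.
    Returns (slope, intercept).
    """
    x, y = np.asarray(x, float), np.asarray(y, float)
    mx, my = x.mean(), y.mean()
    sxx = np.sum((x - mx) ** 2)
    syy = np.sum((y - my) ** 2)
    sxy = np.sum((x - mx) * (y - my))
    # Closed-form errors-in-variables slope.
    slope = (syy - lam * sxx + np.sqrt((syy - lam * sxx) ** 2
                                       + 4 * lam * sxy ** 2)) / (2 * sxy)
    return slope, my - slope * mx

# Hypothetical paired results (ug/L): HPLC-MS/MS as x, MEIA as y.
hplc = np.array([3.2, 5.1, 7.8, 10.4, 14.9, 20.3])
meia = np.array([5.6, 8.0, 11.5, 14.8, 20.7, 27.9])
slope, intercept = deming(hplc, meia)
bias = 100 * (meia - hplc) / hplc  # per-sample relative bias, %
print(f"MEIA = {slope:.2f} x HPLC-MS/MS + {intercept:.2f}")
print(f"mean bias = {bias.mean():.1f}%")
```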

Relevance: 30.00%

Abstract:

Saturated phosphatidylcholines (PCs), particularly dipalmitoylphosphatidylcholine (DPPC), predominate in the surfactant lining the alveoli, although little is known about the relationship between saturated and unsaturated PCs on the outer surface of the lung, the pleura. Seven healthy cats were anesthetized and a bronchoalveolar lavage (BAL) was performed, immediately followed by a pleural lavage (PL). Lipid was extracted from the lavage fluid and then analyzed for saturated (primarily DPPC) and unsaturated PC species using high-performance liquid chromatography (HPLC) with combined fluorescence and ultraviolet detection. Dilution of epithelial lining fluid (ELF) in the lavage fluids was corrected for using the urea method. The concentration of DPPC in BAL fluid (85.3 ± 15.7 µg/mL) was significantly higher (P = 0.021) than that of unsaturated PCs (~40 µg/mL). However, unsaturated PCs (~34 µg/mL), particularly stearoyl-linoleoyl-phosphatidylcholine (SLPC; 17.4 ± 6.8 µg/mL), were significantly higher (P = 0.021) than DPPC (4.3 ± 1.8 µg/mL) in PL fluid. These results show that unsaturated PCs appear functionally more important in the pleural cavity, which may have implications for surfactant replenishment following pleural disease or thoracic surgery. © 2005 Published by Elsevier Ltd.
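The urea correction mentioned above is simple enough to state in code. A minimal sketch, assuming urea diffuses freely so that its concentration in epithelial lining fluid equals that in plasma; the numbers are illustrative, not the study's measurements.

```python
def elf_concentration(analyte_lavage, urea_plasma, urea_lavage):
    """Correct a lavage-fluid concentration for ELF dilution.

    Because urea equilibrates freely, [urea]_ELF == [urea]_plasma,
    so the fraction of the recovered lavage that is ELF is
    urea_lavage / urea_plasma.
    """
    dilution = urea_lavage / urea_plasma
    return analyte_lavage / dilution

# Illustrative values only:
dppc_bal = 4.1      # ug/mL DPPC measured in raw BAL fluid
urea_plasma = 5.0   # mmol/L in plasma
urea_bal = 0.24     # mmol/L in BAL fluid
print(f"DPPC in ELF: {elf_concentration(dppc_bal, urea_plasma, urea_bal):.1f} ug/mL")
```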

Relevance: 30.00%

Abstract:

Background: Determination of the subcellular location of a protein is essential to understanding its biochemical function, and this information can provide insight into the function of hypothetical or novel proteins. These data are difficult to obtain experimentally but have become especially important since many whole-genome sequencing projects have been completed and many of the resulting protein sequences still lack detailed functional information. To address this paucity of data, many computational prediction methods have been developed. However, these methods have varying levels of accuracy and perform differently depending on the sequences presented to the underlying algorithm. It is therefore useful to compare these methods and monitor their performance. Results: In order to perform a comprehensive survey of prediction methods, we selected only methods that accepted large batches of protein sequences, were publicly available, and were able to predict localization to at least nine of the major subcellular locations (nucleus, cytosol, mitochondrion, extracellular region, plasma membrane, Golgi apparatus, endoplasmic reticulum (ER), peroxisome, and lysosome). The selected methods were CELLO, MultiLoc, Proteome Analyst, pTarget and WoLF PSORT. These methods were evaluated using 3763 mouse proteins from SwissProt that represent the source of the training sets used in development of the individual methods. In addition, an independent evaluation set of 2145 mouse proteins from LOCATE, with a bias towards the subcellular localizations underrepresented in SwissProt, was used. The sensitivity and specificity were calculated for each method and compared to a theoretical value based on what might be observed by random chance. Conclusion: No individual method had a sufficient level of sensitivity across both evaluation sets to enable reliable application to hypothetical proteins. All methods showed lower performance on the LOCATE dataset, and variable performance on individual subcellular localizations was observed. Proteins localized to the secretory pathway were the most difficult to predict, while nuclear and extracellular proteins were predicted with the highest sensitivity.
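Per-location sensitivity and specificity of the kind reported here follow from a one-versus-rest tally of predicted against annotated locations. A minimal sketch, assuming single-label annotations; the labels and predictions below are toy data.

```python
def per_class_sens_spec(y_true, y_pred, classes):
    """One-vs-rest sensitivity and specificity for each location."""
    stats = {}
    for c in classes:
        tp = sum(t == c and p == c for t, p in zip(y_true, y_pred))
        fn = sum(t == c and p != c for t, p in zip(y_true, y_pred))
        fp = sum(t != c and p == c for t, p in zip(y_true, y_pred))
        tn = sum(t != c and p != c for t, p in zip(y_true, y_pred))
        stats[c] = (tp / (tp + fn) if tp + fn else float("nan"),
                    tn / (tn + fp) if tn + fp else float("nan"))
    return stats

# Toy annotated vs predicted subcellular locations:
truth = ["nucleus", "cytosol", "ER", "nucleus", "extracellular", "ER"]
pred = ["nucleus", "nucleus", "ER", "nucleus", "extracellular", "Golgi"]
for loc, (sens, spec) in per_class_sens_spec(truth, pred, set(truth)).items():
    print(f"{loc:14s} sensitivity={sens:.2f} specificity={spec:.2f}")
```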

Relevance: 30.00%

Abstract:

Calculating the potentials on the heart's epicardial surface from body surface potentials constitutes one form of the inverse problem of electrocardiography (ECG). Since these problems are ill-posed, one approach is to use zero-order Tikhonov regularization, where the squared norms of both the residual and the solution are minimized, with a relative weight determined by the regularization parameter. In this paper, we used three different methods to choose the regularization parameter in inverse solutions of ECG: the L-curve, generalized cross-validation (GCV) and the discrepancy principle (DP). Among them, the GCV method has received less attention in solutions to ECG inverse problems than the other methods. Since the DP approach needs knowledge of the noise norm, we used a model function to estimate it. The performance of the various methods was compared using a concentric sphere model and a real-geometry heart-torso model, with a distribution of current dipoles placed inside the heart model as the source. Gaussian measurement noise was added to the body surface potentials. The results show that all three methods produce good inverse solutions when the noise is small; but, as the noise increases, the DP approach produces better results than the L-curve and GCV methods, particularly in the real-geometry model. Both the GCV and L-curve methods perform well in low-to-medium noise situations.
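Zero-order Tikhonov regularization and the GCV choice of the parameter are compact to express through the SVD of the transfer matrix. A minimal sketch, assuming a linear forward model b = Ax + noise; the matrix and data below are random stand-ins for the torso transfer matrix and the body surface potentials.

```python
import numpy as np

def tikhonov(A, b, lam):
    """x = argmin ||Ax - b||^2 + lam^2 ||x||^2, via the SVD."""
    U, s, Vt = np.linalg.svd(A, full_matrices=False)
    f = s**2 / (s**2 + lam**2)          # Tikhonov filter factors
    return Vt.T @ (f * (U.T @ b) / s)

def gcv_score(A, b, lam):
    """GCV function G(lam) = ||A x_lam - b||^2 / trace(I - A A_lam^+)^2."""
    U, s, _ = np.linalg.svd(A, full_matrices=False)
    f = s**2 / (s**2 + lam**2)
    beta = U.T @ b
    resid = np.sum(((1 - f) * beta) ** 2) + (b @ b - beta @ beta)
    return resid / (A.shape[0] - np.sum(f)) ** 2

rng = np.random.default_rng(0)
A = rng.standard_normal((120, 80)) / 10        # stand-in transfer matrix
x_true = rng.standard_normal(80)
b = A @ x_true + 0.01 * rng.standard_normal(120)

lams = np.logspace(-4, 1, 200)
lam = lams[np.argmin([gcv_score(A, b, l) for l in lams])]
x_hat = tikhonov(A, b, lam)
err = np.linalg.norm(x_hat - x_true) / np.linalg.norm(x_true)
print(f"GCV-selected lambda = {lam:.4f}, relative error = {err:.3f}")
```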

Relevance: 30.00%

Abstract:

This paper derives the performance union bound of space-time trellis codes in an orthogonal frequency division multiplexing system (STTC-OFDM) over quasi-static frequency-selective fading channels based on the distance spectrum technique. The distance spectrum is the enumeration of the codeword difference measures and their multiplicities, obtained by exhaustively searching through all possible error event paths. The exhaustive search approach can be used for low memory order STTCs with small frame sizes. However, with moderate memory order STTCs and moderate frame sizes, the computational cost of exhaustive search increases exponentially, and it may become impractical for high memory order STTCs. This calls for advanced computational techniques such as genetic algorithms (GAs). In this paper, a GA with a sharing-function method is used to locate the multiple solutions of the distance spectrum for high memory order STTCs. Simulations evaluate the performance union bound and compare the complexity of the non-GA-aided and GA-aided distance spectrum techniques. They show that the union bound gives a close performance measure at high signal-to-noise ratio (SNR), and that the GA-based distance spectrum technique requires much less computation time than the exhaustive search approach while retaining satisfactory accuracy.
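The sharing-function idea used to hold the GA population on multiple distance-spectrum solutions at once is the standard fitness-sharing scheme. A minimal sketch of that scheme, not the authors' exact implementation; the objective function here is a multimodal placeholder for the codeword-distance search.

```python
import numpy as np

def shared_fitness(pop, raw_fitness, sigma_share=0.5, alpha=1.0):
    """Derate raw fitness by a niche count so crowded peaks are penalized.

    sh(d) = 1 - (d / sigma_share)**alpha for d < sigma_share, else 0;
    shared_i = raw_i / sum_j sh(d_ij).
    """
    pop = np.asarray(pop, float)
    d = np.abs(pop[:, None] - pop[None, :])   # pairwise distances
    sh = np.where(d < sigma_share, 1 - (d / sigma_share) ** alpha, 0.0)
    return raw_fitness / sh.sum(axis=1)       # niche count includes self

# Placeholder multimodal objective standing in for the distance spectrum.
f = lambda x: np.sin(3 * x) ** 2 * np.exp(-0.1 * x)
pop = np.random.default_rng(1).uniform(0, 10, 40)
print(shared_fitness(pop, f(pop))[:5])
```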

Relevance: 30.00%

Abstract:

The performances of five different ESI sources coupled to a polystyrene-divinylbenzene monolithic column were compared in a series of LC-ESI-MS/MS analyses of Escherichia coli outer membrane proteins. The sources selected for comparison included two different modifications of the standard electrospray source, a commercial low-flow sprayer, a stainless steel nanospray needle and a coated glass PicoTip. Respective performances were judged on sensitivity and on the number and reproducibility of significant protein identifications obtained through the analysis of multiple identical samples. Data quality ranged from that of a ground silica capillary, which gave 160 total protein identifications, the lowest number of high-quality peptide hits (3012), and generally peaks of lower intensity, to that of a stainless steel nanospray needle, which gave increased precursor ion abundance, the highest-quality peptide fragmentation spectra (5414), and the greatest number of total protein identifications (259) with the highest MASCOT scores (an average increase of 27.5% per identified protein). The data presented show that, despite increased variability in comparative ion intensity, the stainless steel nanospray needle provides the highest overall sensitivity. However, the resulting data were less reproducible in terms of proteins identified in complex mixtures, arguably due to an increased number of high-intensity precursor ion candidates.

Relevance: 30.00%

Abstract:

Linear models reach their limitations in applications with nonlinearities in the data. This paper provides new empirical evidence on the relative Euro inflation forecasting performance of linear and non-linear models. The well-established and widely used univariate ARIMA and multivariate VAR models are used as linear forecasting models, whereas neural networks (NNs) are used as non-linear forecasting models. The level of subjectivity in the NN building process is kept to a minimum in an attempt to exploit the full potential of the NN. It is also investigated whether the historically poor performance of the theoretically superior measure of the monetary services flow, Divisia, relative to the traditional Simple Sum measure could be attributed to some extent to the evaluation of these indices within a linear framework. The results suggest that non-linear models provide better within-sample and out-of-sample forecasts, and that linear models are simply a subset of them. The Divisia index also outperforms the Simple Sum index when evaluated in a non-linear framework. © 2005 Taylor & Francis Group Ltd.
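A linear-versus-nonlinear forecast comparison of this general shape is easy to mock up. A minimal sketch using statsmodels' ARIMA and scikit-learn's MLPRegressor on a synthetic series; the model orders, network size and data are illustrative, not the paper's specification.

```python
import numpy as np
from statsmodels.tsa.arima.model import ARIMA
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)
y = np.zeros(240)                  # synthetic series with a mild nonlinearity
e = rng.standard_normal(240)
for t in range(2, 240):
    y[t] = 0.6 * y[t - 1] - 0.2 * y[t - 2] ** 2 + e[t]
train, test = y[:200], y[200:]

# Linear benchmark: ARIMA(2,0,0), refit for each one-step-ahead forecast.
hist, arima_pred = list(train), []
for obs in test:
    arima_pred.append(ARIMA(hist, order=(2, 0, 0)).fit().forecast(1)[0])
    hist.append(obs)

# Nonlinear model: a small MLP on two lags (rows of X align with y[2:]).
X, tgt = np.column_stack([y[1:-1], y[:-2]]), y[2:]
mlp = MLPRegressor((8,), max_iter=5000, random_state=0).fit(X[:198], tgt[:198])
mlp_pred = mlp.predict(X[198:])    # targets y[200:], i.e. the test set

rmse = lambda p, a: np.sqrt(np.mean((np.asarray(p) - a) ** 2))
print("ARIMA out-of-sample RMSE:", rmse(arima_pred, test))
print("MLP   out-of-sample RMSE:", rmse(mlp_pred, test))
```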

Relevance: 30.00%

Abstract:

The latency variation of the P100M from minute to minute, between morning and afternoon, and from day to day was investigated in an unshielded environment using two single-channel magnetometers. Latency variation was greatest from minute to minute, with relatively little longer-term variation. The two magnetometers differed both in mean latency and in the degree of variation. This may be attributed to variation in the performance of the filters, which were set to a narrow bandwidth for recording in an unshielded environment.

Relevance: 30.00%

Abstract:

It is known that distillation tray efficiency depends on the liquid flow pattern, particularly for large-diameter trays. Scale-up failures due to liquid channelling have occurred, and it is known that fitting flow control devices to trays sometimes improves tray efficiency. Several theoretical models which explain these observations have been published. Further progress in understanding is at present blocked by a lack of experimental measurements of the pattern of liquid concentration over the tray. Flow pattern effects are expected to be significant only on commercial-size trays of large diameter, and the lack of data is a result of the costs, risks and difficulty of making these measurements on full-scale production columns. This work presents a new experiment which simulates distillation by water cooling and provides a means of testing commercial-size trays in the laboratory. Hot water is fed onto the tray and cooled by air forced through the perforations. The analogy between heat and mass transfer shows that the water temperature at any point is analogous to liquid concentration and the enthalpy of the air is analogous to vapour concentration. The effect of the liquid flow pattern on mass transfer is revealed by the temperature field on the tray. The experiment was implemented and evaluated in a column of 1.2 m diameter. The water temperatures were measured by thermocouples interfaced to an electronic computerised data logging system. The "best surface" through the experimental temperature measurements was obtained by the mathematical technique of B-splines, and presented in terms of lines of constant temperature. The results revealed that in general liquid channelling is more important in the bubbly "mixed" regime than in the spray regime. However, it was observed that severe channelling also occurred for intense spray at incipient flood conditions. This is an unexpected result. A computer program was written to calculate point efficiency as well as tray efficiency, and the results were compared with distillation efficiencies for similar loadings. The theoretical model of Porter and Lockett for predicting distillation was modified to predict water cooling, and the theoretical predictions were shown to be similar to the experimental temperature profiles. A comparison of the repeatability of the experiments with an error analysis revealed that accurate tray efficiency measurements require temperature measurements to better than ±0.1 °C, which is achievable with conventional techniques. This was not achieved in this work, and resulted in considerable scatter in the efficiency results. Nevertheless it is concluded that the new experiment is a valuable tool for investigating the effect of the liquid flow pattern on tray mass transfer.
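The "best surface" step, fitting a smooth surface through scattered thermocouple readings and reading off isotherms, can be sketched with SciPy's bivariate smoothing splines. A minimal sketch, assuming scattered (x, y, T) points on the tray; the readings below are synthetic, not data from the rig.

```python
import numpy as np
from scipy.interpolate import SmoothBivariateSpline

rng = np.random.default_rng(0)
# Synthetic thermocouple readings on a 1.2 m diameter tray: water cools
# from inlet (left) to outlet (right), plus measurement noise.
x = rng.uniform(-0.6, 0.6, 120)
y = rng.uniform(-0.6, 0.6, 120)
T = 50.0 - 12.0 * (x + 0.6) + rng.normal(0.0, 0.3, 120)   # deg C

# The smoothing factor s trades fidelity for smoothness, playing the
# role of the B-spline surface fit described above.
surface = SmoothBivariateSpline(x, y, T, kx=3, ky=3, s=120 * 0.3**2)

# Evaluate on a grid; contouring this grid gives the isotherms.
gx = np.linspace(-0.6, 0.6, 50)
gy = np.linspace(-0.6, 0.6, 50)
Tg = surface(gx, gy)
print(f"fitted surface spans {Tg.min():.1f} to {Tg.max():.1f} deg C")
```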

Relevance: 30.00%

Abstract:

This thesis presents a comparison of integrated biomass-to-electricity systems on the basis of their efficiency, capital cost and electricity production cost. Four systems are evaluated: combustion to raise steam for a steam cycle; atmospheric gasification to produce fuel gas for a dual-fuel diesel engine; pressurised gasification to produce fuel gas for a gas turbine combined cycle; and fast pyrolysis to produce pyrolysis liquid for a dual-fuel diesel engine. The feedstock in all cases is wood in chipped form. This is the first time that all three thermochemical conversion technologies have been compared in a single, consistent evaluation. The systems have been modelled from the transportation of the wood chips through pretreatment, thermochemical conversion and electricity generation. Equipment requirements during pretreatment are comprehensively modelled and include reception, storage, drying and comminution. The decoupling of the fast pyrolysis system is examined, where the fast pyrolysis and engine stages are carried out at separate locations. Relationships are also included to allow learning effects to be studied. The modelling is achieved through the use of multiple spreadsheets, where each spreadsheet models part of the system in isolation and the spreadsheets are combined to give the cost and performance of a whole system. The use of the models has shown that on current costs the combustion system remains the most cost-effective generating route, despite its low efficiency. The novel systems only produce lower-cost electricity if learning effects are included, implying that some sort of subsidy will be required during the early development of the gasification and fast pyrolysis systems to make them competitive with the established combustion approach. The use of decoupling in fast pyrolysis systems is a useful way of reducing system costs if electricity is required at several sites, because a single pyrolysis site can be used to supply all the generators, offering economies of scale at the conversion step. Overall, costs are much higher than conventional electricity generating costs for fossil fuels, due mainly to the small scales used. Biomass-to-electricity opportunities remain restricted to niche markets where electricity prices are high or feed costs are very low. It is highly recommended that further work examine possibilities for combined heat and power, which is suitable for small-scale systems and could increase revenues and thereby reduce electricity prices.
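The learning effects built into the spreadsheet models follow the usual experience-curve form, in which capital cost falls by a fixed fraction each time cumulative installed capacity doubles. A minimal sketch of that relationship; the progress ratio and base cost are illustrative, not figures from the thesis.

```python
import numpy as np

def learned_cost(c0, cumulative, base, progress_ratio=0.85):
    """Experience curve: cost falls to progress_ratio of its previous
    value for every doubling of cumulative capacity.

    c = c0 * (cumulative / base) ** log2(progress_ratio)
    """
    return c0 * (cumulative / base) ** np.log2(progress_ratio)

# Illustrative: a gasifier system starting at 3000 GBP/kW installed.
for doublings in range(5):
    capacity = 10 * 2 ** doublings   # MW of cumulative installed capacity
    print(f"{capacity:4d} MW -> {learned_cost(3000, capacity, 10):6.0f} GBP/kW")
```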

Relevance: 30.00%

Abstract:

This thesis looks at two issues. Firstly, statistical work was undertaken examining profit margins, labour productivity and total factor productivity in telecommunications in ten member states of the EU over a 21-year period (not all member states of the EU could be included due to data inadequacy). Three non-members, namely Switzerland, Japan and the US, were also included for comparison. This research was intended to provide an understanding of how telecoms in the European Union (EU) have developed. There are two propositions in this part of the thesis: (i) privatisation and market liberalisation improve performance; (ii) countries that liberalised their telecoms sectors first show better productivity growth than countries that liberalised later. In sum, a mixed picture is revealed. Some countries performed better than others over time, but there is no apparent relationship between productivity performance and the two propositions. Some of the results from this part of the thesis were published in Dabler et al. (2002). Secondly, the remainder of the thesis tests the proposition that the telecoms directives of the European Commission created harmonised regulatory systems in the member states of the EU. By undertaking explanatory research, this thesis not only seeks to establish whether harmonisation has been achieved, but also tries to find an explanation as to why this is so. To accomplish this, as a first stage, a questionnaire survey was administered to the fifteen telecoms regulators in the EU. The purpose of the survey was to provide knowledge of the methods, rationales and approaches adopted by the regulatory offices across the EU. This allowed a decision as to whether harmonisation in telecoms regulation had been achieved. Stemming from the results of the questionnaire analysis, follow-up case studies with four telecoms regulators were undertaken in a second stage of this research. The objective of these case studies was to take into account the country-specific circumstances of telecoms regulation in the EU. To undertake the case studies, several sources of evidence were combined. More specifically, the annual Implementation Reports of the European Commission were reviewed alongside the findings from the questionnaire. Then, interviews with senior members of staff in the four regulatory authorities were conducted. Finally, the evidence from the questionnaire survey and from the case studies was corroborated to provide an explanation as to why telecoms regulation in the EU has or has not reached a state of harmonisation. In addition to testing whether harmonisation has been achieved and why, this research has found evidence of different approaches to control over telecoms regulators and to market intervention administered by telecoms regulators within the EU. Regarding regulatory control, it was found that some member states have adopted a mainly proceduralist model, some have implemented more of a substantive model, and others have adopted a mix of both. Some findings from the second stage of the research were published in Dabler and Parker (2004). Similarly, regarding market intervention by regulatory authorities, different member states treat market intervention differently, namely according to market-driven or non-market-driven models, or a mix of both approaches.
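Total factor productivity growth of the kind computed in the first part of the thesis is conventionally measured as a Solow residual: output growth minus share-weighted input growth. A minimal sketch with labour and capital only; the shares and growth rates are illustrative.

```python
def tfp_growth(d_output, d_labour, d_capital, labour_share):
    """Solow residual: output growth minus share-weighted input growth.

    Growth rates are annual log-differences; labour_share is labour's
    share of total factor income (capital gets the remainder).
    """
    d_inputs = labour_share * d_labour + (1 - labour_share) * d_capital
    return d_output - d_inputs

# Illustrative annual figures for a telecoms operator:
print(f"TFP growth: {tfp_growth(0.06, 0.01, 0.04, labour_share=0.6):.3%}")
```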

Relevance: 30.00%

Abstract:

Britain's sea and flood defences are becoming increasingly aged and, as a consequence, more fragile and vulnerable. As the government's philosophy on resources shifts against the use of prime quarried and dredged geo-materials, the need to find alternative bulk materials to bolster Britain's prone defences becomes more pressing. One conceivable source for such a material is colliery waste, or minestone. Although a plethora of erosion-abrasion studies have been carried out on soils and soil-cements, very little research has been undertaken to determine the resistance of minestone and its cement-stabilized form to the effects of water erosion. The thesis reviews the current extent to which soil-cements, minestone and cement stabilized minestone (CSM) have been employed for hydraulic construction projects. A synopsis is also given of the effects of immersion on shales, mudstones and minestone, especially with regard to the phenomenon of slaking. A laboratory study was undertaken featuring a selection of minestones from several British coalfields. The stability of minestone and CSM in sea water and distilled water was assessed using slaking tests and immersion monitoring, and the bearing on the use of these materials for hydraulic construction is discussed. Following a review of current erosion apparatus, the erosion/abrasion test and rotating cylinder device were chosen and employed to assess the erosion resistance of minestone and CSM. Comparison was made with a sand mix designed to represent a dredged sand, the more traditional bulk hydraulic construction material. The results of the erosion study suggest that both minestone and CSM were more resistant to erosion and abrasion than equivalently treated sand mixes. The greater resistance of minestone to the agents of erosion and abrasion is attributed to several factors, including the size of the particles, a greater degree of cement bonding and the ability of the minestone aggregate to absorb, rather than transmit, shock waves produced by impacting abrasive particles. Although minestone is shown to be highly unstable when subjected to cyclic changes in its moisture content, the study suggests that even in an intertidal regime where cyclic immersion does take place, minestone will retain sufficient moisture within its fabric to prevent slaking from taking place. The slaking study reveals a close relationship between slaking susceptibility and total pore surface area as revealed by porosimetry. The immersion study shows that although the fabric of CSM is rapidly attacked in sea water, a high degree of the disruption is associated with the edges and corners of samples (i.e. free surfaces), while the integrity of the internal fabric remains relatively intact. CSM samples were shown to be resilient when subjected to immersion in distilled water. An overall assessment of minestone and CSM would suggest that, with judicious selection and appropriate quality control, they could be used as alternative materials for flood and sea defences. It is believed that, even in the harsh regime of a marine environment, CSM could be employed for temporary and sacrificial schemes.

Relevance: 30.00%

Abstract:

The recording of visual acuity using the Snellen letter chart is only a limited measure of the visual performance of an eye wearing a refractive aid. Qualitative as well as quantitative information is required to establish such a parameter: spatial, temporal and photometric aspects must all be incorporated into the test procedure. The literature relating to the correction of ametropia by refractive aids was reviewed. Selected aspects of a comparison between the correction provided by spectacles and contact lenses were considered. Special attention was directed to soft hydrophilic contact lenses. Despite technological advances which have produced physiologically acceptable soft lenses, unpredictable visual factors remain associated with this recent form of refractive aid. Several techniques for vision assessment were described, and previous studies of visual performance were discussed. To facilitate the investigation of visual performance in a clinical environment, a new semi-automated system was described: this utilized the presentation of broken-ring test stimuli on a television screen. The research project comprised two stages. Initial work was concerned with the validation of the television system, including the optimization of its several operational variables. The second phase involved the utilization of the system in an investigation of visual performance aspects of the first month of regular daily soft contact lens wear by experimentally naive subjects. On the basis of the results of this work, an 'homoeostatic' model has been proposed to represent the strategy which an observer adopts in order to optimize his visual performance with soft contact lenses.

Relevance: 30.00%

Abstract:

In this paper we report a comparative analysis of the factors which contribute to the innovation performance of manufacturing firms in the US state of Georgia and three European regions: the UK regions of Wales and the West Midlands, and the Spanish region of Catalonia. We consider the factors which shape firms' ability to generate new products and processes and to undertake various forms of organisational and structural change. We are particularly concerned with how firms collect the knowledge on which they base their innovation and with their effectiveness in translating that knowledge into new innovations. Three main empirical conclusions result. First, US firms have more widespread links to external knowledge sources than those in Europe; notably, the universities make a greater contribution to innovation in the US than in Europe. Second, UK firms prove more effective at capturing synergies between their innovation activities than US and Catalan firms. Third, firms' operating environment proves more conducive to innovation in the US than in either the UK regions or Catalonia. Our results suggest the potential for mutual learning. For the UK there are lessons in the way in which the universities in Georgia are supporting innovation. For firms in Georgia and in Catalonia the potential lessons are more strategic or organisational, relating to how they can better capture potential synergies between their innovation activities.