897 results for Application efficiency
Abstract:
Cysteine cathepsins, such as cathepsin S (CTSS), are implicated in the pathology of a wide range of diseases and are of potential utility as diagnostic and prognostic biomarkers. In previous work, we demonstrated the potency and efficiency of a biotinylated diazomethylketone (DMK)-based activity-based probe (ABP), biotin-PEG-LVG-DMK, for disclosure of recombinant CTSS and of CTSS in cell lysates. However, the limited cell permeability of both the biotin and spacer groups restricted detection of CTSS to cell lysates. Here, the synthesis and characterisation of a cell-permeable ABP to report on intracellular CTSS activity are reported. The ABP, Z-PraVG-DMK, a modified peptidyl diazomethylketone, was based on the N-terminal Leu-Val-Gly motif of human cystatin. The leucine residue was replaced with the alkyne-bearing propargylglycine to facilitate conjugation of an azide-tagged reporter group by click chemistry following irreversible inhibition of CTSS. When incubated with viable Human Embryonic Kidney 293 cells, Z-PraVG-DMK permitted disclosure of CTSS activity following cell lysis and rhodamine azide conjugation, employing standard click chemistry protocols. Furthermore, the fluorescent tag facilitated direct detection of CTSS by in-gel fluorescence scanning, obviating the need for downstream biotin-streptavidin conjugation and detection procedures.
Abstract:
Ground-source heat pump (GSHP) systems represent one of the most promising techniques for heating and cooling in buildings. These systems use the ground as a heat source/sink, allowing better efficiency thanks to the low seasonal variation of the ground temperature. The ground-source heat exchanger (GSHE) then becomes a key component for optimizing the overall performance of the system. Moreover, the short-term response related to the dynamic behaviour of the GSHE is a crucial aspect, especially from a control perspective in on/off-controlled GSHP systems. In this context, a novel numerical GSHE model has been developed at the Instituto de Ingeniería Energética, Universitat Politècnica de València. Based on decoupling the short-term and long-term responses of the GSHE, the novel model allows faster and more precise models to be used on each side. In particular, the short-term model considered is the B2G model, developed and validated in previous research conducted at the Instituto de Ingeniería Energética. For the long term, the g-function model was selected, since it is a previously validated and widely used model and presents some features that are useful for its combination with the B2G model. The aim of the present paper is to describe the procedure of combining these two models to obtain a single complete GSHE model for both short- and long-term simulation. The resulting model is then validated against experimental data from a real GSHP installation.
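The long-term side of such a combined model is typically evaluated by temporal superposition of load steps through the g-function. The sketch below illustrates that superposition only; the `g_placeholder` curve, ground properties, and loads are invented stand-ins, not the paper's B2G or g-function data.

```python
import math

def g_placeholder(t_over_ts):
    # Hypothetical stand-in for a tabulated g-function; real values come from
    # the borefield geometry (e.g., pre-computed response-factor curves).
    return math.log(1.0 + 9.0 * t_over_ts)

def borehole_wall_temp(loads_w_per_m, dt_s, t_ground=14.0, k_ground=2.0, ts=1e8):
    """Temporal superposition of step load changes via the g-function."""
    n = len(loads_w_per_m)
    t_n = n * dt_s
    temp = t_ground
    prev = 0.0
    for i, q in enumerate(loads_w_per_m):
        elapsed = t_n - i * dt_s           # age of this load step at time t_n
        temp += (q - prev) * g_placeholder(elapsed / ts) / (2 * math.pi * k_ground)
        prev = q
    return temp

# A constant 30 W/m injection load warms the borehole wall progressively.
month = 30 * 24 * 3600.0
one_year = borehole_wall_temp([30.0] * 12, month)
two_years = borehole_wall_temp([30.0] * 24, month)
```

With a constant load only the first step change contributes, so the wall temperature simply tracks the growing g-function value.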
Abstract:
Monitoring multiple myeloma patients for relapse requires sensitive methods to measure minimal residual disease and to establish a more precise prognosis. The present study aimed to standardize a real-time quantitative polymerase chain reaction (PCR) test for the IgH gene with a JH consensus self-quenched fluorescence reverse primer and a VDJH or DJH allele-specific sense primer (self-quenched PCR). This method was compared with an allele-specific real-time quantitative PCR test for the IgH gene using a TaqMan probe and a JH consensus primer (TaqMan PCR). We studied nine multiple myeloma patients from the Spanish group treated with the MM2000 therapeutic protocol. Self-quenched PCR demonstrated a sensitivity of at least 10^-4, or 16 genomes, in most cases; efficiency was 1.71 to 2.14; and intra-assay and interassay reproducibility were 1.18% and 0.75%, respectively. Sensitivity, efficiency, and residual disease detection were similar with both PCR methods. TaqMan PCR failed in one case because of a mutation in the JH primer binding site, whereas self-quenched PCR worked well in this case. In conclusion, self-quenched PCR is a sensitive and reproducible method for quantifying residual disease in multiple myeloma patients; it yields results similar to TaqMan PCR and may be more effective than the latter when somatic mutations are present in the JH intronic primer binding site.
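The abstract reports per-cycle efficiencies between 1.71 and 2.14. Although it does not detail the calculation, such values are conventionally derived from the slope of a standard curve over a serial dilution, E = 10^(-1/slope), where E = 2.0 means perfect doubling each cycle. The dilution series and Ct values below are hypothetical.

```python
def standard_curve_efficiency(log10_copies, ct_values):
    """Least-squares slope of Ct vs log10(input copies); E = 10**(-1/slope)."""
    n = len(log10_copies)
    mx = sum(log10_copies) / n
    my = sum(ct_values) / n
    slope = (sum((x - mx) * (y - my) for x, y in zip(log10_copies, ct_values))
             / sum((x - mx) ** 2 for x in log10_copies))
    return 10 ** (-1.0 / slope)

# Hypothetical 10-fold dilution series: a slope near -3.3 cycles per decade
# corresponds to an efficiency close to 2.0 (perfect doubling).
dilutions = [5, 4, 3, 2, 1]            # log10 input copies
cts = [18.1, 21.4, 24.7, 28.0, 31.3]   # observed threshold cycles
eff = standard_curve_efficiency(dilutions, cts)
```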
Abstract:
Development of reliable methods for optimised energy storage and generation is one of the most imminent challenges in modern power systems. In this paper, an adaptive approach to the load-levelling problem is proposed, using novel dynamic models based on Volterra integral equations of the first kind with piecewise continuous kernels. These integral equations efficiently solve such an inverse problem, taking into account both the time-dependent efficiencies and the generation/storage availability of each energy storage technology. In this analysis, a direct numerical method is employed to find the least-cost dispatch of the available storage units. The proposed collocation-type numerical method has second-order accuracy and enjoys self-regularization properties associated with the confidence levels of system demand. This adaptive approach is suitable for energy storage optimisation in real time. The efficiency of the proposed methodology is demonstrated on the Single Electricity Market of the Republic of Ireland and Northern Ireland.
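As a minimal illustration of the direct collocation approach mentioned above, the sketch below solves a first-kind Volterra equation, integral from 0 to t of K(t,s)x(s)ds = f(t), by midpoint collocation and forward substitution. The smooth kernel and right-hand side are invented test data, not the paper's piecewise continuous kernels.

```python
def solve_volterra_first_kind(kernel, f, t_max, n):
    """Midpoint collocation: the discretized system is lower triangular,
    so the unknowns can be recovered by forward substitution."""
    h = t_max / n
    nodes = [(j + 0.5) * h for j in range(n)]      # midpoint quadrature nodes
    x = []
    for i in range(1, n + 1):
        t = i * h                                  # collocation point t_i
        acc = sum(kernel(t, nodes[j]) * x[j] * h for j in range(i - 1))
        x.append((f(t) - acc) / (kernel(t, nodes[i - 1]) * h))
    return nodes, x

# With K(t, s) = 1 and f(t) = t, the exact solution is x(s) = 1.
nodes, x = solve_volterra_first_kind(lambda t, s: 1.0, lambda t: t, 1.0, 50)
```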
Abstract:
Reliability has emerged as a critical design constraint, especially in memories. Designers go to great lengths to guarantee fault-free operation of the underlying silicon by adopting redundancy-based techniques, which essentially try to detect and correct every single error. However, such techniques come at the cost of large area, power, and performance overheads, leading many researchers to doubt their efficiency, especially for error-resilient systems where 100% accuracy is not always required. In this paper, we present an alternative method focusing on the confinement of the output error induced by any reliability issue. Focusing on memory faults, rather than correcting every single error, the proposed method exploits the statistical characteristics of the target application and replaces any erroneous data with the best available estimate of that data. To realize the proposed method, a RISC processor is augmented with custom instructions and special-purpose functional units. We apply the method to the proposed enhanced processor by studying the statistical characteristics of the various algorithms involved in a popular multimedia application. Our experimental results show that, in contrast to state-of-the-art fault tolerance approaches, we are able to reduce runtime and area overhead by 71.3% and 83.3%, respectively.
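The core idea of replacing flagged erroneous data with a statistical best estimate, rather than correcting the raw bits, can be sketched as follows. The fault-flag interface and the neighbour-mean estimator are assumptions for illustration, not the paper's hardware mechanism.

```python
def confine_errors(samples, faulty_flags):
    """Replace words flagged as faulty with the mean of their valid neighbours,
    confining the output error instead of correcting the underlying bits."""
    out = list(samples)
    for i, bad in enumerate(faulty_flags):
        if not bad:
            continue
        neighbours = [samples[j] for j in (i - 1, i + 1)
                      if 0 <= j < len(samples) and not faulty_flags[j]]
        # Fall back to the mean of all valid samples if both neighbours are faulty.
        if not neighbours:
            neighbours = [s for s, b in zip(samples, faulty_flags) if not b]
        out[i] = sum(neighbours) / len(neighbours)
    return out

# A corrupted pixel row: index 2 holds a bit-flipped value flagged as faulty.
row = [100, 102, 4095, 104, 106]
flags = [False, False, True, False, False]
healed = confine_errors(row, flags)
```

For smooth multimedia data the estimate is usually close to the lost value, which is why such confinement can be far cheaper than exact correction.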
Abstract:
Low-temperature, low-pressure hydrogen-based plasmas were used to study the influence of process and discharge conditions on corrosion removal. A capacitively coupled RF discharge in continuous or pulsed regime was used at an operating pressure of 100-200 Pa. The plasma treatment was monitored by optical emission spectroscopy. To study the influence of the various process parameters, model corroded samples with and without sandy incrustation were prepared. SEM-EDX analyses were carried out to verify the corrosion removal efficiency. Experimental conditions were optimized for the most frequent materials of original metallic archaeological objects (iron, bronze, copper, and brass). Chloride removal is based on hydrogen ion reactions, while oxides are removed mainly by interactions with neutral species. Special attention was paid to the sample temperature, because it was necessary to avoid any metallographic changes in the material structure. The application of a higher-power pulsed regime with a low duty cycle seems to be the best treatment regime. Low-pressure hydrogen plasma is not applicable to objects with a very fragile structure or to nonmetallic objects, owing to the non-uniform heat stress. For this reason, newly developed plasmas generated in liquids were applied to selected original archaeological glass materials.
Abstract:
Hotel chains have access to a treasure trove of “big data” on individual hotels’ monthly electricity and water consumption. Benchmarked comparisons of hotels within a specific chain create the opportunity to cost-effectively improve the environmental performance of specific hotels. This paper describes a simple approach for using such data to achieve the joint goals of reducing operating expenditure and achieving broad sustainability goals. In recent years, energy economists have used such “big data” to generate insights about the energy consumption of the residential, commercial, and industrial sectors. Lessons from these studies are directly applicable to the hotel sector. A hotel’s administrative data provide a “laboratory” for conducting randomized controlled trials to establish what works in enhancing hotel energy efficiency.
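A minimal sketch of the benchmarking idea: normalize each hotel's consumption per occupied room-night and flag hotels well above the chain average. The one-standard-deviation threshold and all figures are hypothetical.

```python
def benchmark_hotels(usage):
    """usage: {hotel: (kwh_month, occupied_room_nights)} -> intensities and flags."""
    intensity = {h: kwh / nights for h, (kwh, nights) in usage.items()}
    mean = sum(intensity.values()) / len(intensity)
    var = sum((v - mean) ** 2 for v in intensity.values()) / len(intensity)
    std = var ** 0.5
    # Flag hotels more than one standard deviation above the chain mean.
    flagged = sorted(h for h, v in intensity.items() if v > mean + std)
    return intensity, flagged

chain = {
    "A": (52_000, 2_600),   # 20.0 kWh per occupied room-night
    "B": (61_500, 3_000),   # 20.5
    "C": (90_000, 2_500),   # 36.0  <- candidate for an efficiency audit
    "D": (44_000, 2_200),   # 20.0
}
intensity, flagged = benchmark_hotels(chain)
```

Occupancy normalization matters: a hotel with high raw consumption but full rooms may be more efficient than a half-empty one.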
Abstract:
A new method for evaluating the efficiency of parabolic trough collectors, called the Rapid Test Method, is investigated at the Solar Institut Jülich. The basic concept is to carry out measurements under stagnation conditions. This allows a fast and inexpensive procedure, since no working fluid is required. With this approach, the temperature reached by the inner wall of the receiver is assumed to be the stagnation temperature and hence the average temperature inside the collector. This leads to a systematic error, which can be rectified through the introduction of a correction factor. A model of the collector is simulated with COMSOL Multiphysics to study the size of the correction factor depending on collector geometry and working conditions. The resulting values are compared with experimental data obtained at a test rig at the Solar Institut Jülich. These results do not match the simulated ones; consequently, it was not possible to verify the model. The reliability of both the COMSOL Multiphysics model and the measurements is analysed. The influence of the correction factor on the Rapid Test Method is also studied, as well as the possibility of neglecting it by measuring the receiver’s inner wall temperature where it receives the least solar irradiation. The last two chapters analyse the specific heat capacity as a function of pressure and temperature and present some considerations on the uncertainties of the efficiency curve obtained with the Rapid Test Method.
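The stagnation concept can be illustrated with a simple steady-state balance: with zero useful output, absorbed optical power equals thermal losses, so a loss coefficient can be backed out from the measured stagnation temperature. The linear loss model and all numbers below are assumptions for illustration, not the thesis's COMSOL model.

```python
def loss_coefficient_from_stagnation(eta0, dni, concentration, t_stag, t_amb):
    """At stagnation the useful output is zero, so (assuming linear losses):
       eta0 * DNI * C = U_L * (T_stag - T_amb)  ->  U_L per receiver area."""
    return eta0 * dni * concentration / (t_stag - t_amb)

# Hypothetical trough: optical efficiency 0.75, DNI 900 W/m2, concentration 25x,
# measured stagnation temperature 330 C at 30 C ambient.
u_l = loss_coefficient_from_stagnation(0.75, 900.0, 25.0, 330.0, 30.0)
```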
Abstract:
This paper examines whether restaurant reservations should be locked to specific tables at the time the reservation is made, or whether the reservations should be pooled and assigned to tables in real-time. In two motivating studies, we find that there is a lack of consensus in the restaurant industry on handling reservations. Contrary to what might be expected based on research that shows the benefits of resource pooling in other contexts, a survey of 425 restaurants indicated that over 80% lock reservations to tables. In two simulation studies, we determine that pooling reservations enables a 15-minute reduction in table turn times more than 15% of the time, which consequently increases service efficiency and enables a restaurant to serve more customers during peak periods. Pooling had the most consistent advantage with higher customer service levels, with larger restaurants, with customers who arrive late, and with larger variation in customer arrival time.
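The contrast between locking and pooling can be sketched with a deterministic toy example; all arrival and table-availability times below are invented.

```python
def locked_waits(arrivals, assigned_free_times):
    """Each reservation was locked to a specific table at booking time."""
    return [max(0, free - arr) for arr, free in zip(arrivals, assigned_free_times)]

def pooled_waits(arrivals, free_times):
    """Assign each arriving party to whichever table frees up earliest."""
    remaining = sorted(free_times)
    return [max(0, remaining.pop(0) - arr) for arr in arrivals]

# Four parties (arrival minutes) and four tables; the locked plan fixed tables
# in booking order, before actual turn times were known.
arrivals = [0, 5, 10, 15]
locked_plan = [20, 0, 30, 10]    # free time of each party's pre-assigned table
total_locked = sum(locked_waits(arrivals, locked_plan))
total_pooled = sum(pooled_waits(arrivals, locked_plan))
```

Pooling cuts the total wait here (30 vs 40 minutes) because real-time assignment avoids seating an early party at a slow-turning table.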
Abstract:
“It is true that we are talking about American sports (where the money never seems to run out), and it is equally true that we are about to enter the world of the National Basketball Association (the second-highest-revenue league in the world, behind only the National Football League), but the figures of the new television contract secured by commissioner Adam Silver are astonishing.” ... “For the remainder of this work it is essential to introduce the PER (Player Efficiency Rating). We are talking about what is currently considered by most to be the most advanced tool for evaluating a player’s performance over the course of a game, and therefore of a season.” ... “If we have standardized with respect to minutes played, number of possessions, and so on... why not also standardize with respect to the salary earned by the player himself?” ... “From all of this... my idea begins.”
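The salary standardization hinted at in the last quotes reduces to simple arithmetic; the function name `per_per_million` and all figures below are hypothetical, not the thesis's actual metric.

```python
def per_per_million(per, salary_usd):
    """Hypothetical cost-adjusted rating: PER points per million dollars of salary."""
    return per / (salary_usd / 1_000_000)

# Made-up players: a max-contract star vs a productive rookie-scale contributor.
star = per_per_million(26.0, 32_000_000)
rookie = per_per_million(18.0, 4_000_000)
```

On this view the rookie, though less productive in absolute terms, delivers far more rating per dollar.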
Abstract:
Battery Energy Storage Systems (BESS) offer tremendous advantages in the generation, transmission, distribution, and consumption of electrical energy. This technology is notably regarded by several operators around the world as a new device for injecting large quantities of renewable energy on the one hand and, on the other, as an essential component of large power grids. Moreover, enormous benefits can be associated with deploying BESS technology, both in smart grids and for reducing greenhouse gas emissions, reducing marginal losses, supplying certain consumers with emergency power, improving energy management, and increasing energy efficiency in networks. This thesis comprises three stages: Stage 1 concerns the use of BESS to reduce electrical losses; Stage 2 uses BESS as a spinning-reserve element to mitigate grid vulnerability; and Stage 3 introduces a new method for improving frequency oscillations through reactive-power modulation, together with the use of BESS to provide primary frequency reserve. The first stage, on using BESS to reduce losses, is itself subdivided into two sub-stages: the first devoted to optimal allocation and the second to optimal utilization.
In the first sub-stage, the NSGA-II (Non-dominated Sorting Genetic Algorithm II) was programmed on CASIR, IREQ's supercomputer, as a multi-objective evolutionary algorithm to extract a set of solutions for the optimal sizing and suitable placement of multiple BESS units, minimizing power losses while simultaneously considering the total installed BESS power capacity as objective functions. The first sub-stage gives a satisfactory answer to the allocation problem and also resolves the scheduling question in the Québec interconnection. To achieve the objective of the second sub-stage, a number of solutions were retained and implemented over a one-year interval, taking into account the parameters (time, capacity, efficiency, power factor) associated with the BESS charge and discharge cycles, with the reduction of marginal losses and energy efficiency as the main objectives. In the second stage, a new vulnerability index, well suited to modern grids equipped with BESS, was introduced, formalized, and studied. The NSGA-II genetic algorithm was run again, with minimization of the proposed vulnerability index and energy efficiency as the main objectives. The results obtained show that the use of BESS can, in some cases, prevent major grid outages. The third stage presents a new concept of adding virtual inertia to power grids through reactive-power modulation. The use of BESS as primary frequency reserve is then presented. Finally, a generic BESS model, associated with the Québec interconnection, was proposed in a MATLAB environment.
The simulation results confirm that both the active and reactive power of the BESS can be used for frequency regulation.
Abstract:
Thesis (Ph.D.)--University of Washington, 2016-08
Abstract:
Conventional wisdom in many agricultural systems across the world is that farmers cannot, will not, or should not pay the full costs associated with surface water delivery. Across Organisation for Economic Co-operation and Development (OECD) countries, only a handful can claim complete recovery of operation, maintenance, and capital costs; across Central and South Asia, fees are lower still, with farmers in Nepal, India, and Kazakhstan paying fractions of a U.S. penny for a cubic meter of water. In Pakistan, fees amount to roughly USD 1-2 per acre per season. However, farmers in Pakistan spend orders of magnitude more for diesel fuel to pump groundwater each season, suggesting a latent willingness to spend for water that, under the right conditions, could potentially be directed toward water-use fees for surface water supply. Although overall performance could be expected to improve with greater cost recovery, asymmetric access to water in canal irrigation systems leaves the question open as to whether those benefits would be equitably shared among all farmers in the system. We develop an agent-based model (ABM) of a small irrigation command to examine efficiency and equity outcomes across a range of different cost structures for the maintenance of the system, levels of market development, and assessed water charges. We find that, robust to a range of different cost and structural conditions, increased water charges lead to gains in both efficiency and concomitant improvements in equity as investments in canal infrastructure and system maintenance improve the conveyance of water resources further down watercourses. 
This suggests that genuine win-win solutions can be attained through higher water-use fees to beneficiary farmers when (1) farmers are already spending money to pump groundwater to compensate for a failing surface water system, and (2) initial investment can deliver a perceptibly better water supply.
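The fee-maintenance-conveyance feedback described above can be sketched in a stylized single-watercourse model: higher charges fund maintenance, maintenance lowers seepage losses, and both total deliveries (efficiency) and the tail-to-head delivery ratio (equity) improve. The loss-versus-budget response below is an invention for illustration, not the paper's calibrated agent-based model.

```python
def watercourse_outcomes(n_farmers, supply, fee, base_loss=0.15, min_loss=0.02):
    """Fees fund maintenance; maintenance lowers the per-reach seepage loss."""
    budget = fee * n_farmers
    loss = max(min_loss, base_loss - 0.01 * budget)    # assumed response curve
    share = supply / n_farmers
    # Each successive farmer down the watercourse loses one more reach of seepage.
    delivered = [share * (1 - loss) ** i for i in range(n_farmers)]
    efficiency = sum(delivered) / supply
    equity = delivered[-1] / delivered[0]              # tail vs head farmer
    return efficiency, equity

eff_low, eq_low = watercourse_outcomes(10, 100.0, fee=0.2)
eff_high, eq_high = watercourse_outcomes(10, 100.0, fee=1.0)
```

Because losses compound along the channel, reducing the per-reach loss helps tail-end farmers disproportionately, which is why efficiency and equity move together here.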
Abstract:
Through Law No. 12,715/2012, the Brazilian government instituted guidelines for a program named Inovar-Auto. In this context, energy efficiency has been a survival requirement for the Brazilian automotive industry since September 2016. As set out in the law, energy efficiency is not calculated for individual models only; it is calculated over the whole universe of new vehicles registered. In this scenario, the composition of vehicles sold in the market will be a key factor in each automaker's profits, and energy efficiency and its consequences should be taken into consideration in all their aspects. This raises the following question: what is an automaker's long-term efficiency curve that allows it to comply with the rules and balance investment in technologies, increasing energy efficiency without hurting the competitiveness of its product lineup? Among the several variables to be considered, one can highlight manufacturing costs, customer value perception, and market share, which characterizes this problem as multi-criteria decision-making. To tackle the energy efficiency problem required by the legislation, this paper proposes a multi-criteria decision-making framework that combines a Delphi group and the Analytic Hierarchy Process to identify suitable alternatives for automakers to incorporate in the main Brazilian vehicle segments. A forecast model based on artificial neural networks was used to estimate vehicle sales demand and validate the expected results. The approach is demonstrated in a real case study using public vehicle sales and energy efficiency data from Brazilian automakers.
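The Analytic Hierarchy Process step in such a framework can be sketched with the standard geometric-mean approximation of priority weights. The three criteria match those named in the abstract, but the Saaty-scale judgments below are invented for illustration.

```python
from math import prod

def ahp_weights(pairwise):
    """Approximate AHP priorities: normalised geometric means of the matrix rows."""
    n = len(pairwise)
    gm = [prod(row) ** (1.0 / n) for row in pairwise]
    total = sum(gm)
    return [g / total for g in gm]

# Hypothetical judgments over (manufacturing cost, customer value, market share):
# cost judged 3x as important as value and 5x as important as share; value 2x share.
matrix = [
    [1.0, 3.0, 5.0],
    [1 / 3, 1.0, 2.0],
    [1 / 5, 1 / 2, 1.0],
]
weights = ahp_weights(matrix)
```

The geometric-mean method is a common stand-in for the principal-eigenvector computation and agrees with it exactly for fully consistent matrices like this one.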