972 results for Time measurements
Abstract:
Goldsmiths'-Kress no. 20637.27.
Abstract:
Flow cytometry, in combination with advances in bead coding technologies, is maturing as a powerful high-throughput approach for analyzing molecular interactions. Applications of this technology include antibody assays and single nucleotide polymorphism mapping. This review describes the recent development of a microbead flow cytometric approach to analyze RNA-protein interactions and discusses emerging bead coding strategies that together will allow genome-wide identification of RNA-protein complexes. The microbead flow cytometric approach is flexible and provides new opportunities for functional genomic studies and small-molecule screening.
Abstract:
Some of the problems arising from the inherent instability of emulsions are discussed. Aspects of emulsion stability are described, and particular attention is given to the influence of the chemical nature of the dispersed phase on adsorbed film structure and stability. Emulsion stability has been measured by a photomicrographic technique. Electrophoresis, interfacial tension and droplet rest-time data were also obtained. Emulsions were prepared using a range of oils, including aliphatic and aromatic hydrocarbons, dispersed in a solution of sodium dodecyl sulphate. In some cases a small amount of alkane or alkanol was incorporated into the oil phase. In general the findings agree with the classical view that the stability of oil-in-water emulsions is favoured by a closely packed interfacial film and appreciable electric charge on the droplets. The inclusion of a non-ionic alcohol leads to enhanced stability, presumably owing to the formation of a "mixed" interfacial film which is more closely packed and probably more coherent than that of the anionic surfactant alone. In some instances differences in stability cannot be accounted for simply by differences in interfacial adsorption or droplet charge. Alternative explanations are discussed, and it is postulated that the coarsening of emulsions may occur not only by coalescence but also through the migration of oil from small droplets to larger ones by molecular diffusion. The viability of using the coalescence rates of droplets at a plane interface as a guide to emulsion stability has been investigated. The construction of a suitable apparatus and the development of a standard testing procedure are described. Coalescence-time distributions may be correlated by equations similar to those presented by other workers, or by an analysis based upon the log-normal function. Stability parameters for a range of oils are discussed in terms of differences in film drainage and the nature of the interfacial film.
Despite some broad correlations there is generally poor agreement between droplet and emulsion stabilities. It is concluded that hydrodynamic factors largely determine droplet stability in the systems studied. Consequently, droplet rest-time measurements do not provide a sensible indication of emulsion stability.
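The log-normal analysis of coalescence-time distributions mentioned above can be sketched numerically. The rest-time sample below is synthetic, and the method-of-moments fit in log-space is one simple way (among several) to estimate the distribution's parameters:

```python
import math
import random

random.seed(1)

# Hypothetical droplet rest-times (s): a log-normally distributed sample,
# standing in for measured coalescence times at a plane interface.
rest_times = [math.exp(random.gauss(mu=1.0, sigma=0.4)) for _ in range(500)]

# Fit a log-normal by the method of moments in log-space:
# estimate the mean and standard deviation of ln(t).
logs = [math.log(t) for t in rest_times]
mu_hat = sum(logs) / len(logs)
sigma_hat = math.sqrt(sum((x - mu_hat) ** 2 for x in logs) / (len(logs) - 1))

# Median rest-time implied by the fit, one candidate stability parameter.
median_rest_time = math.exp(mu_hat)
print(mu_hat, sigma_hat, median_rest_time)
```

The fitted pair (mu_hat, sigma_hat) plays the role of the stability parameters correlated across oils in the study.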
Abstract:
Ultra-long mode-locked lasers are known to be strongly influenced by nonlinear interactions in long cavities, which result in noise-like stochastic pulses. Here, using an advanced technique for real-time measurement of both the temporal and spatial (over round-trips) intensity evolution, we reveal the existence of a wide range of generation regimes. Different kinds of coherent structures, including dark and grey solitons and rogue-like bright coherent structures, are observed, and the interactions between them are revealed.
Abstract:
AMS Subj. Classification: 92C30
Abstract:
Diabetes mellitus (DM) is a metabolic disorder characterised by hyperglycaemia resulting from defects in insulin secretion, insulin action or both. The long-term specific effects of DM include the development of retinopathy, nephropathy and neuropathy. Cardiac disease, peripheral arterial disease and cerebrovascular disease are also known to be linked with DM. Type 1 diabetes mellitus (T1DM) accounts for approximately 10% of all individuals with DM, and insulin therapy is the only available treatment. Type 2 diabetes mellitus (T2DM) accounts for 90% of all individuals with DM. Diet, exercise, oral hypoglycaemic agents and occasionally exogenous insulin are used to manage T2DM. The diagnosis of DM is made where the glycated haemoglobin (HbA1c) percentage is greater than 6.5%. Pattern-reversal visual evoked potential (PVEP) testing is an objective means of evaluating impulse conduction along the central nervous pathways. An increased peak time of the visual P100 waveform is an expression of structural damage at the level of the myelinated optic nerve fibres. This was an observational cross-sectional study. The participants were grouped into two phases. Phase 1, the control group, consisted of 30 healthy non-diabetic participants. Phase 2 comprised 104 diabetic participants, of whom 52 had an HbA1c greater than 10% (poorly controlled DM) and 52 an HbA1c of 10% or less (moderately controlled DM). The aim of this study was, firstly, to observe the possible association between glycated haemoglobin levels and the P100 peak time of pattern-reversal visual evoked potentials (PVEPs) in DM, and secondly, to assess whether the central nervous system (CNS), and in particular visual function, is affected by the type and/or duration of DM. The cut-off value defining P100 peak-time delay was calculated as the mean P100 peak time plus 2.5 × the standard deviation as measured for the non-diabetic control group, and was 110.64 ms for the right eye.
The proportion of delayed P100 peak times amounted to 38.5% for both diabetic groups; thus the poorly controlled group (HbA1c > 10%) did not pose an increased risk for delayed P100 peak time relative to the moderately controlled group (HbA1c ≤ 10%). The P100 PVEP results of this study do, however, reflect a significant delay (p < 0.001) in the DM group as compared to the non-diabetic group; thus, subclinical neuropathy of the CNS occurs in 38.5% of cases. The duration of DM and type of DM had no influence on the P100 peak time measurements.
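The cut-off rule used in the study (control mean plus 2.5 standard deviations) is simple to reproduce. The control values below are illustrative placeholders, not the study's data:

```python
import statistics

# Hypothetical control-group P100 peak times (ms). The study derives its
# cut-off as: mean + 2.5 * SD of the non-diabetic control distribution.
control_p100_ms = [101.2, 99.8, 103.5, 100.4, 102.1, 98.9, 104.0, 100.7]

mean = statistics.mean(control_p100_ms)
sd = statistics.stdev(control_p100_ms)  # sample standard deviation
cutoff = mean + 2.5 * sd

def is_delayed(p100_ms: float) -> bool:
    """Flag a P100 peak time as delayed if it exceeds the control cut-off."""
    return p100_ms > cutoff

print(round(cutoff, 2))
```

With real control data this reproduces the study's 110.64 ms right-eye threshold; here the placeholder sample simply demonstrates the arithmetic.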
Abstract:
This paper, using detailed time measurements of patients complemented by interviews with hospital management and staff, examines three facets of an emergency room's (ER) operational performance: (1) the effectiveness of the triage system in rationing patient treatment; (2) the factors influencing the ER's operational performance in general and the trade-offs between flow times, inventory levels (that is, the number of patients waiting in the system) and resource utilization; (3) the impacts of potential process and staffing changes to improve the ER's performance. Specifically, the paper discusses four proposals for streamlining the patient flow: establishing designated tracks (fast track, diagnostic track), creating a holding area for certain types of patients, introducing a protocol that would reduce the load on physicians by allowing a registered nurse to order testing and treatment for some patients, and, potentially and in the longer term, moving from non-ER specialist physicians to ER specialists. The paper's findings are based on analyzing the paths and flow times of close to two thousand patients in the emergency room of the Medical Center of Leeuwarden (MCL), The Netherlands. Using exploratory data analysis, the paper presents generalizable findings about the impacts of various factors on the ER's lead-time performance and shows how the proposals fit with well-documented process improvement theories. © 2010 Elsevier B.V. All rights reserved.
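The trade-off between flow time, inventory and throughput examined above is governed by Little's law (L = λ·W). A minimal sketch with illustrative numbers, not taken from the MCL data:

```python
# Little's law (L = lambda * W) links the three quantities the paper trades
# off: inventory (patients in the system), arrival rate and flow time.
# All numbers below are illustrative assumptions.

arrival_rate_per_hour = 6.0   # lambda: average patient arrivals per hour
avg_flow_time_hours = 2.5     # W: average time a patient spends in the ER

# L: average number of patients present in the ER at any moment.
avg_patients_in_system = arrival_rate_per_hour * avg_flow_time_hours

# Halving the flow time (e.g. via a fast track) halves the inventory
# at the same arrival rate.
improved_inventory = arrival_rate_per_hour * (avg_flow_time_hours / 2)

print(avg_patients_in_system, improved_inventory)
```

This is why the paper's proposals target flow time: at a fixed arrival rate, any reduction in W translates directly into fewer patients waiting.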
Abstract:
We present recent results on measurements of intensity spatio-temporal dynamics in a passively mode-locked fibre laser. We experimentally uncover distinct dynamic and stable spatio-temporal generation regimes, with various stochasticity and periodicity properties, in a laser thought to be unstable. We present a method to distinguish various types of generated coherent structures within the radiation, including rogue and shock waves, by introducing an intensity auto-correlation function (ACF) evolution map. We also discuss how the spectral dynamics could be measured in fibre lasers generating an irregular train of pulses or quasi-CW radiation, via a combination of heterodyning and the intensity spatio-temporal measurement concept.
Abstract:
For an erbium-doped fiber laser mode-locked by carbon nanotubes, we demonstrate experimentally and theoretically a new type of vector rogue wave emerging as a result of the chaotic evolution of trajectories between two orthogonal states of polarization on the Poincaré sphere. In terms of fluctuation-induced phenomena, by tuning the polarization controller for the pump wave and the in-cavity polarization controller, we are able to control the Kramers time, i.e. the residence time of the trajectory in the vicinity of each orthogonal state of polarization, and so can induce rare events that satisfy rogue-wave criteria and take the form of transitions from the state with a long residence time to the state with a short residence time.
Abstract:
Water is an essential and scarce resource; as such, measures must be found that allow its sustainable use and guarantee the protection of the environment. This growing concern has driven national and international legislation aimed at ensuring sustainable development, notably the Water Framework Directive and the Water Law, which is complemented by assorted legislation. As a component of the urban water cycle, water supply systems have undergone developments that were not always adequate. It is in this context that various tools for improving the management of water resources have emerged in Portugal. Water utilities aim at the efficient management of water and have two important instruments at their disposal: the National Programme for the Efficient Use of Water and the ERSAR guide to controlling water losses in public transmission and distribution systems. This management involves not only addressing the problem of water losses, both real and apparent, but also analysing the behaviour that gives rise to waste. APA, as a water utility, seeks to maximise the efficiency of its supply system; to this end, the tools proposed by ERSAR were applied. It was concluded that this system has total water losses of 34%, essentially due to the ageing meter stock and (theoretical) losses in the distribution branch connections. Commercial losses represent about 69%, which shows that the volumes of non-billed water (metered or not) are very high. In addition, calculating the water balance and the performance indices makes it possible to classify the performance of the supply system and compare it against its management objectives. Given the volume of water lost in the branch connections, night-time measurements were carried out, showing that in the Porto de Pesca Costeira there is an unjustified volume of water flow.
Accordingly, an action plan was drawn up to increase the efficiency of the system, i.e. to reduce total losses from 34% to 15%.
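The water-balance arithmetic behind figures like the 34% loss rate can be sketched as follows; the volumes are placeholders, not the APA system's actual data:

```python
# Non-revenue-water sketch for a supply system water balance.
# All volumes below are illustrative assumptions.

system_input_m3 = 100_000.0      # total volume entering the system
billed_authorised_m3 = 66_000.0  # billed consumption (metered or not)

# Total losses = system input minus billed authorised consumption.
total_losses_m3 = system_input_m3 - billed_authorised_m3
loss_fraction = total_losses_m3 / system_input_m3  # the "34%" figure

# Share of the losses that is commercial (apparent) rather than physical.
commercial_losses_m3 = 0.69 * total_losses_m3

print(round(loss_fraction * 100, 1), round(commercial_losses_m3))
```

The action plan's target of 15% corresponds, in this scheme, to reducing total_losses_m3 to 15% of system_input_m3.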
Abstract:
New methods of nuclear fuel and cladding characterization must be developed and implemented to enhance the safety and reliability of nuclear power plants. One class of such advanced methods is aimed at characterizing fuel performance by performing minimally intrusive, in-core, real-time measurements on nuclear fuel at the nanometer scale. Nuclear power plants depend on instrumentation and control systems for monitoring, control and protection. Traditionally, methods for fuel characterization under irradiation follow a "cook and look" approach. These methods are very expensive and labor-intensive, since they require removal, inspection and return of irradiated samples for each measurement. Such fuel cladding inspection methods investigate oxide layer thickness, wear, dimensional changes, ovality, nuclear fuel growth and nuclear fuel defect identification. These methods are also not suitable for all commercial nuclear power applications, as they are not always available to the operator when needed. Additionally, such techniques often provide limited data and may exacerbate the phenomena being investigated. This thesis investigates a novel nanostructured sensor, based on a photonic crystal design, implemented in a nuclear reactor environment. The aim of this work is to produce an in-situ, radiation-tolerant sensor capable of measuring the deformation of a nuclear material during nuclear reactor operations. The sensor was fabricated on the surface of nuclear reactor materials (specifically, steel and zirconium based alloys). Charged-particle and mixed-field irradiations were performed on a newly developed "pelletron" beamline at Idaho State University's Research and Innovation in Science and Engineering (RISE) complex and at the University of Maryland's 250 kW Training Reactor (MUTR).
The sensors were irradiated to 6 different fluences (ranging from 1 to 100 dpa), followed by intensive characterization using focused ion beam (FIB), transmission electron microscopy (TEM) and scanning electron microscopy (SEM) to investigate the physical deformation and microstructural changes between different fluence levels, to provide high-resolution information regarding the material performance. Computer modeling (SRIM/TRIM) was employed to simulate damage to the sensor as well as to provide significant information concerning the penetration depth of the ions into the material.
Abstract:
One challenge in data assimilation (DA) methods is how the error covariance for the model state is computed. Ensemble methods have been proposed for producing error covariance estimates, as error is propagated in time using the non-linear model. Variational methods, on the other hand, use the concepts of control theory, whereby the state estimate is optimized from both the background and the measurements. Numerical optimization schemes are applied which solve the problems of memory storage and huge matrix inversion encountered by classical Kalman filter methods. The Variational Ensemble Kalman Filter (VEnKF), a method inspired by the Variational Kalman Filter (VKF), enjoys the benefits of both ensemble methods and variational methods. It avoids the filter inbreeding problems which emerge when the ensemble spread underestimates the true error covariance; in VEnKF this is tackled by resampling the ensemble every time measurements are available. One advantage of VEnKF over VKF is that it needs neither tangent linear code nor adjoint code. In this thesis, VEnKF has been applied to a two-dimensional shallow water model simulating a dam-break experiment. The model is a public code, with water height measurements recorded at seven stations along the mid-line of the 21.2 m long, 1.4 m wide flume. Because the data were too sparse to assimilate into the 30 171-element model state vector, we chose to interpolate the data both in time and in space. The results of the assimilation were compared with those of a pure simulation. We found that the results produced by the VEnKF were more realistic, without the numerical artifacts present in the pure simulation. Creating a wrapper code for a model and a DA scheme can be challenging, especially when the two were designed independently or are poorly documented. In this thesis we present a non-intrusive approach to coupling the model and a DA scheme: an external program sends and receives information between the model and the DA procedure using files.
The advantage of this method is that the changes needed in the model code are minimal, only a few lines which facilitate input and output. Apart from the simplicity of the coupling, the approach can be employed even if the two were written in different programming languages, because the communication does not go through code. The non-intrusive approach accommodates parallel computing by simply telling the control program to wait until all the processes have ended before the DA procedure is invoked. It is worth mentioning the overhead introduced by the approach, as at every assimilation cycle both the model and the DA procedure have to be initialized. Nonetheless, the method can be an ideal approach for a benchmark platform for testing DA methods. The non-intrusive VEnKF has been applied to the multi-purpose hydrodynamic model COHERENS to assimilate Total Suspended Matter (TSM) in Lake Säkylän Pyhäjärvi. The lake has an area of 154 km² with an average depth of 5.4 m. Turbidity and chlorophyll-a concentrations from MERIS satellite images for 7 days between May 16 and July 6, 2009 were available; the effect of organic matter was computationally eliminated to obtain TSM data. Because of the computational demands of both COHERENS and VEnKF, we chose to use a 1 km grid resolution. The results of the VEnKF were compared with the measurements recorded at an automatic station located in the north-western part of the lake; however, due to the sparsity of the TSM data in both time and space, they could not be well matched. The use of multiple automatic stations with real-time data is important to avoid the time-sparsity problem; with DA, this will help in better understanding environmental hazard variables, for instance. We found that using a very large ensemble does not necessarily improve the results, because there is a limit beyond which additional ensemble members add very little to the performance.
The successful implementation of the non-intrusive VEnKF, together with the ensemble-size limit on performance, leads to the emerging area of Reduced Order Modeling (ROM). To save computational resources, ROM avoids running the full-blown model. Applying ROM with the non-intrusive DA approach may yield a cheaper algorithm that relaxes the computational challenges existing in the fields of modelling and DA.
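The non-intrusive, file-based coupling described above can be illustrated with a toy model and a toy analysis step. The file format, update rule and function names here are assumptions for illustration, not the thesis's actual code:

```python
import json
import os
import tempfile

# Toy illustration of non-intrusive coupling: the model and the DA step
# exchange state only through a file, so neither needs the other's code,
# and they could even be written in different languages.

def model_step(state_file: str) -> None:
    """Advance a trivial 'model' and write its state back to the file."""
    with open(state_file) as f:
        state = json.load(f)
    state["x"] = [xi * 1.1 for xi in state["x"]]  # placeholder dynamics
    with open(state_file, "w") as f:
        json.dump(state, f)

def da_step(state_file: str, observation: float) -> None:
    """Read the model state from the file and nudge it toward the observation."""
    with open(state_file) as f:
        state = json.load(f)
    gain = 0.5  # placeholder for the VEnKF analysis update
    state["x"] = [xi + gain * (observation - xi) for xi in state["x"]]
    with open(state_file, "w") as f:
        json.dump(state, f)

# One assimilation cycle: model forecast, then analysis, via the file alone.
path = os.path.join(tempfile.mkdtemp(), "state.json")
with open(path, "w") as f:
    json.dump({"x": [1.0, 2.0]}, f)
model_step(path)
da_step(path, observation=2.0)
with open(path) as f:
    print(json.load(f)["x"])
```

In the thesis's setting, an external control program would invoke the two steps in turn (waiting for all parallel model processes before the analysis), which is exactly where the per-cycle initialization overhead mentioned above arises.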
Abstract:
The use of infrared burners in industrial applications has many technical and operational advantages, for example uniformity of the heat supply in the form of radiation and convection, with greater control of emissions due to the passage of the exhaust gases through a macro-porous ceramic bed. This paper presents a commercial infrared burner adapted with an experimental ejector capable of promoting a mixture of liquefied petroleum gas (LPG) and glycerin. By varying the percentage of each fuel, the performance of the infrared burner was evaluated through an energy balance and atmospheric-emission measurements. A temperature controller with a thermocouple modulating in two stages (low heat / high heat) was introduced, using solenoid valves for each fuel. The burner was tested while varying the amount of glycerin inserted by a gravity feed system. For the thermodynamic analysis of the load, an aluminum plate was placed at the exit of the combustion gases, and the temperature distribution was measured by a data acquisition system that recorded real-time measurements from the attached thermocouples. The burner exhibited stable combustion at glycerin additions of 15, 20 and 25% by mass relative to the LPG, increasing the supply of heat to the plate. The data obtained showed an improvement in the First Law efficiency of the infrared burner with increasing addition of glycerin. The emission levels of the gases produced by combustion (CO, NOx, SO2 and HC) met the environmental limits set by CONAMA Resolution No. 382/2006.
Abstract:
In high-velocity open channel flows, the measurements of air-water flow properties are complicated by the strong interactions between the flow turbulence and the entrained air. In the present study, an advanced signal processing of traditional single- and dual-tip conductivity probe signals is developed to provide further details on the air-water turbulence levels, time scales and length scales. The technique is applied to turbulent open channel flows on a stepped chute in a large-size facility, with flow Reynolds numbers ranging from 3.8 × 10⁵ to 7.1 × 10⁵. The air-water flow properties presented some basic characteristics that were qualitatively and quantitatively similar to previous skimming flow studies. Some self-similar relationships were observed systematically at both macroscopic and microscopic levels. These included the distributions of void fraction, bubble count rate, interfacial velocity and turbulence level at a macroscopic scale, and the auto- and cross-correlation functions at the microscopic level. New correlation analyses yielded a characterisation of the large eddies advecting the bubbles. Basic results included the integral turbulent length and time scales. The turbulent length scales characterised some measure of the size of the large vortical structures advecting air bubbles in the skimming flows, and the data were closely related to the characteristic air-water depth Y90. In the spray region, the present results highlighted the existence of an upper spray region for C > 0.95 to 0.97, in which the distributions of droplet chord sizes and integral advection scales presented some marked differences with the rest of the flow.
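Estimating an integral time scale by integrating the auto-correlation function up to its first zero crossing, as in such correlation analyses, can be sketched on a synthetic signal. Here an AR(1) process stands in for a probe signal; the sampling interval and coefficient are arbitrary choices:

```python
import random

# Synthetic stand-in for a probe signal: an AR(1) process, whose true ACF
# decays as alpha**lag, giving a known integral time scale dt/(1 - alpha).
dt = 0.001    # sampling interval (s), assumed
n = 20000
alpha = 0.95

random.seed(0)
x = [0.0]
for _ in range(n - 1):
    x.append(alpha * x[-1] + random.gauss(0.0, 1.0))

mean = sum(x) / n
var = sum((xi - mean) ** 2 for xi in x) / n

def acf(lag: int) -> float:
    """Biased sample auto-correlation of the signal at the given lag."""
    s = sum((x[i] - mean) * (x[i + lag] - mean) for i in range(n - lag))
    return s / (n * var)

# Integral time scale: integrate the ACF up to its first zero crossing.
T = 0.0
lag = 0
while acf(lag) > 0.0:
    T += acf(lag) * dt
    lag += 1
print(T)
```

For this signal the theoretical value is dt/(1 − alpha) = 0.02 s; the sample estimate lands near it, with the zero-crossing cutoff trimming the noisy ACF tail.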
Abstract:
It is not possible to make measurements of the phase of an optical mode using linear optics without introducing an extra phase uncertainty. This extra phase variance is quite large for heterodyne measurements; however, it is possible to reduce it to the theoretical limit of log n̄/(4n̄²) using adaptive measurements. These measurements are quite sensitive to experimental inaccuracies, especially time delays and inefficient detectors. Here it is shown that the minimum introduced phase variance when there is a time delay of τ is τ/(8n̄). This result is verified numerically, showing that the phase variance introduced approaches this limit for most of the adaptive schemes using the best final phase estimate. The main exception is the adaptive mark II scheme with simplified feedback, which is extremely sensitive to time delays. The extra phase variance due to time delays is considered for the mark I case with simplified feedback, verifying the τ/2 result obtained by Wiseman and Killip both by a more rigorous analytic technique and numerically.
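Restated in standard notation (with n̄ the mean photon number and τ the feedback delay), the bounds discussed above are:

```latex
% Extra phase variance introduced by the measurement scheme:
% the adaptive-measurement limit, and the minimum penalty from a delay \tau.
V_{\mathrm{adaptive}} \approx \frac{\log \bar{n}}{4\bar{n}^{2}},
\qquad
V_{\mathrm{delay}}(\tau) = \frac{\tau}{8\bar{n}}.
```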