937 results for Single frequency
Abstract:
Transition induced by an isolated streamwise vortex embedded in a flat-plate boundary layer was studied experimentally. The vortex was created by a gentle hill with a Gaussian profile that spanned half the width of a flat plate mounted in a low-turbulence wind tunnel. PIV and hot-wire anemometry data were taken. Transition occurs as a non-inclined shear layer breaks up into a sequence of vortices close to the boundary layer edge. The passing frequency of these vortices scales with the square of the freestream velocity, similar to that in single-roughness-induced transition. Quadrant analysis of the streamwise and wall-normal velocity fluctuations shows large ejection events in the outer layer. (C) 2015 Elsevier Inc. All rights reserved.
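The quadrant analysis mentioned above decomposes the instantaneous Reynolds shear stress by the signs of the velocity fluctuations; a minimal sketch in Python (variable names are illustrative, not from the paper):

```python
import numpy as np

def quadrant_analysis(u, v):
    """Quadrant decomposition of the Reynolds shear stress <u'v'>.
    Q2 (u' < 0, v' > 0) corresponds to ejections, Q4 (u' > 0, v' < 0)
    to sweeps. Returns each quadrant's contribution to <u'v'>,
    normalised by the total."""
    up, vp = u - u.mean(), v - v.mean()   # fluctuations about the mean
    uv = up * vp
    total = uv.mean()
    masks = {
        "Q1": (up > 0) & (vp > 0),
        "Q2": (up < 0) & (vp > 0),        # ejections
        "Q3": (up < 0) & (vp < 0),
        "Q4": (up > 0) & (vp < 0),        # sweeps
    }
    return {k: uv[m].sum() / uv.size / total for k, m in masks.items()}
```

Applied to time series of u and v at a point in the outer layer, a dominant Q2 fraction indicates the large ejection events the abstract reports.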
Abstract:
We report an anomalous re-entrant glassy magnetic phase in (100)-oriented ferromagnetic LaMn0.5Co0.5O3 single crystals. The characterization combines conventional magnetometry with linear as well as nonlinear ac susceptibility and specific heat measurements. As the sample is cooled below the ferromagnetic transition temperature, it re-enters a glassy magnetic phase whose dynamics bear little resemblance to the conventional response. The glassy transition shifts to higher temperature with increasing frequency of the applied ac field, but it does not respond to dc biasing or memory experiments. Specific heat and nonlinear ac susceptibility data likewise do not match the conventional glassy response. An unusually low magnetic entropy indicates the lack of long-range magnetic ordering. The results demonstrate that the glassy phase in LaMn0.5Co0.5O3 is not due to any of the known conventional origins. We infer that the competing ferromagnetic and antiferromagnetic interactions arising from high B-site disorder are responsible for this anomalous re-entrant glassy phase. (C) 2016 AIP Publishing LLC.
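For context, the frequency shift of a glass transition is conventionally quantified by the Mydosh parameter (standard spin-glass phenomenology, not quoted in the abstract):

\[
\delta T_f = \frac{\Delta T_f}{T_f\,\Delta \log_{10} f},
\]

where T_f is the freezing temperature and f the ac drive frequency. Canonical spin glasses show δT_f ≈ 0.005–0.01, while non-interacting superparamagnets show δT_f ≳ 0.1; intermediate values are usually read as cluster-glass dynamics, which is one way such an "unconventional" frequency response can be benchmarked.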
Abstract:
This paper details a bulk acoustic mode resonator fabricated in single-crystal silicon with a quality factor of 15 000 in air, and over one million below 10 mTorr, at a resonant frequency of 2.18 MHz. The resonator is a square plate that is excited in the square-extensional mode and has been fabricated in a commercial foundry silicon-on-insulator (SOI) MEMS process through MEMSCAP. This paper also presents a simple method of extracting resonator parameters from raw measurements heavily buried in electrical feedthrough. Its accuracy has been demonstrated through a comparison between extracted motional resistance values measured at different voltage biases and those predicted from an analytical model. Finally, a method of substantially cancelling electrical feedthrough via a system-level electronic implementation is also introduced. © 2008 IOP Publishing Ltd.
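The abstract does not spell out the extraction procedure, but a standard approach for resonators buried in feedthrough is to fit the measured admittance to a Butterworth-Van Dyke equivalent circuit with a parallel feedthrough capacitance. A minimal sketch, with all names and starting values illustrative:

```python
import numpy as np
from scipy.optimize import curve_fit

def bvd_admittance_mag(f, Rm, Lm, Cm, Cft):
    """Admittance magnitude of a Butterworth-Van Dyke circuit: a series
    RLC motional branch (Rm, Lm, Cm) in parallel with a feedthrough
    capacitance Cft."""
    w = 2.0 * np.pi * f
    Zm = Rm + 1j * w * Lm + 1.0 / (1j * w * Cm)   # motional branch
    return np.abs(1.0 / Zm + 1j * w * Cft)        # add feedthrough path

# Usage sketch: freqs (Hz) and ymeas (S) from a sweep around 2.18 MHz;
# p0 holds rough starting values, e.g. seeded from the observed
# resonance frequency and quality factor.
# popt, pcov = curve_fit(bvd_admittance_mag, freqs, ymeas, p0=p0)
# Rm, Lm, Cm, Cft = popt   # extracted motional parameters + feedthrough
```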
Abstract:
A single-crystal silicon resonant bulk acoustic mass sensor with a measured resolution of 125 pg/cm² is presented. The mass sensor comprises a micromachined silicon plate that is excited in the square-extensional bulk acoustic resonant mode at a frequency of 2.182 MHz, with a quality factor exceeding 10⁶. The mass sensor has a measured mass to frequency shift sensitivity of 132 Hz·cm²/μg. The resonator element is embedded in a feedback loop of an electronic amplifier to implement an oscillator with a short term frequency stability of better than 7 ppb at an operating pressure of 3.8 mTorr. © 2007 American Institute of Physics.
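As a consistency check on the quoted figures (simple arithmetic, not from the paper): a 7 ppb fractional stability at 2.182 MHz corresponds to a frequency noise floor of

\[
\delta f \approx 7\times10^{-9}\times 2.182\ \text{MHz} \approx 0.015\ \text{Hz},
\qquad
\delta m \approx \frac{\delta f}{S} = \frac{0.015\ \text{Hz}}{132\ \text{Hz}\cdot\text{cm}^{2}/\mu\text{g}} \approx 115\ \text{pg/cm}^{2},
\]

the same order as the quoted 125 pg/cm² resolution, so the resolution is set essentially by the oscillator's short-term stability divided by the mass sensitivity.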
Abstract:
This paper reports the design and electrical characterization of a micromechanical disk resonator fabricated in single crystal silicon using a foundry SOI micromachining process. The microresonator has been selectively excited in the radial extensional and the wine glass modes by reversing the polarity of the DC bias voltage applied on selected drive electrodes around the resonant structure. The quality factor of the resonator vibrating in the radial contour mode was 8000 at a resonant frequency of 6.34 MHz at pressures below 10 mTorr. The highest measured quality factor of the resonator in the wine glass resonant mode was 1.9 × 10⁶ using a DC bias voltage of 20 V at about the same pressure in vacuum; the resonant frequency was 5.43 MHz and the lowest motional resistance measured was approximately 17 kΩ using a DC bias voltage of 60 V applied across 2.7 μm actuation gaps. This corresponds to a resonant frequency-quality factor (f-Q) product of 1.02 × 10¹³, among the highest reported for single crystal silicon microresonators, and on par with the best quartz crystal resonators. The quality factor for the wine glass mode in air was approximately 10,000. © 2009 Elsevier B.V. All rights reserved.
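The quoted f-Q product is simply the product of the rounded figures above:

\[
f \cdot Q \approx 5.43\times10^{6}\ \text{Hz} \times 1.9\times10^{6} \approx 1.03\times10^{13}\ \text{Hz},
\]

consistent, to within rounding of the quoted Q, with the stated 1.02 × 10¹³.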
Abstract:
A two-stage sampling design is used to estimate the variances of the numbers of yellowfin in different age groups caught in the eastern Pacific Ocean. For purse seiners, the primary sampling unit (n) is a brine well containing fish from a month-area stratum; the fish lengths (m) measured from each well are the secondary units. The fish cannot be selected at random from the wells because of practical limitations. The effects of different sampling methods and other factors on the reliability and precision of statistics derived from the length-frequency data were therefore examined, and modifications are recommended where necessary. Lengths of fish measured during the unloading of six test wells revealed two forms of inherent size stratification: 1) short-term disruptions of the existing pattern of sizes, and 2) transition zones between long-term trends in sizes. To some degree, all wells exhibited cyclic changes in mean size and variance during unloading. In half of the wells, size selection by the unloaders was observed to induce a change in mean size. As a result of stratification, the sequence of sizes removed from all wells was non-random, regardless of whether a well contained fish from a single set or from more than one set. The number of modal sizes in a well was not related to the number of sets. In an additional well composed of fish from several sets, an experiment on vertical mixing indicated that a representative sample of the contents may be restricted to the bottom half of the well. The contents of the test wells were used to generate 25 simulated wells and to compare the results of three sampling methods applied to them. The methods were: (1) random sampling (also used as a standard), (2) protracted sampling, in which the selection process was extended over a large portion of a well, and (3) measuring fish consecutively during removal from the well. Repeated sampling by each method and different combinations of n and m indicated that, because the principal source of size variation occurred among primary units, increasing n was the most effective way to reduce the variance estimates of both the age-group sizes and the total number of fish in the landings. Protracted sampling largely circumvented the effects of size stratification, and its performance was essentially comparable to that of random sampling; sampling by this method is recommended. Consecutive-fish sampling produced more biased estimates with greater variances. Analysis of the 1988 length-frequency samples indicated that, for age groups that appear most frequently in the catch, a minimum sampling frequency of one primary unit in six for each month-area stratum would reduce the coefficients of variation (CV) of their size estimates to approximately 10 percent or less. Additional stratification of samples by set type, rather than by month-area alone, further reduced the CVs of scarce age groups, such as the recruits, and potentially improved their accuracy. The CVs of recruitment estimates for completely-fished cohorts during the 1981-1984 period were in the vicinity of 3 to 8 percent. Recruitment estimates and their variances were also relatively insensitive to changes in the individual quarterly catches and variances, respectively, of which they were composed.
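A minimal sketch of the two-stage logic behind the n-versus-m conclusion, under simplifying assumptions (equal numbers of lengths per well, no finite-population corrections; the study's estimator is more complete):

```python
import numpy as np

def two_stage_mean_and_var(wells):
    """Two-stage estimate of a mean length and its variance.
    `wells` is a list of 1-D arrays, one per primary unit (brine well),
    each holding that well's secondary-unit measurements (fish lengths).
    """
    n = len(wells)
    well_means = np.array([w.mean() for w in wells])
    grand_mean = well_means.mean()
    # The between-well variance dominates the total, so the estimator's
    # variance falls roughly as 1/n: sampling more wells (n) tightens
    # the estimate far more than measuring more fish per well (m),
    # which is the study's central recommendation.
    var_of_mean = well_means.var(ddof=1) / n
    return grand_mean, var_of_mean
```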
Abstract:
Understanding fluctuations in tropical cyclone activity along United States shores and abroad becomes increasingly important as coastal managers and planners seek to save lives, mitigate damage, and plan for resilience in the face of changing storminess and sea-level rise. Tropical cyclone activity has long been of concern to coastal areas because these storms bring strong winds, heavy rains, and high seas. Given projections of a warming climate, current estimates suggest that tropical cyclones will increase not only in frequency but also in intensity (maximum sustained winds and minimum central pressures). An understanding of what has happened historically is an important step in identifying potential future changes in tropical cyclone frequency and intensity. The ability to detect such changes depends on a consistent and reliable global tropical cyclone dataset. Until recently no central repository for historical tropical cyclone data existed. To fill this need, the International Best Track Archive for Climate Stewardship (IBTrACS) dataset was developed to collect all known global historical tropical cyclone data into a single source for dissemination. With this dataset, a global examination of changes in tropical cyclone frequency and intensity can be performed. Caveats apply to any historical tropical cyclone analysis, however, as the data contributed to the IBTrACS archive by the various tropical cyclone warning centers are still replete with biases that may stem from operational changes, inhomogeneous monitoring programs, and time discontinuities. A detailed discussion of the difficulties in detecting trends using tropical cyclone data can be found in Landsea et al. (2006). The following sections use the IBTrACS dataset to show the global spatial variability of tropical cyclone frequency and intensity. Analyses will show where the strongest storms typically occur, the regions with the highest number of tropical cyclones per decade, and the locations of highest average maximum wind speeds.
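IBTrACS is distributed in several formats, including CSV. A sketch of the kind of frequency/intensity summary described above (file and column names follow the public IBTrACS v04 CSV convention and should be checked against the release actually used):

```python
import pandas as pd

# One row per storm fix; the second line of the file holds units.
ib = pd.read_csv("ibtracs.ALL.list.v04r00.csv",
                 skiprows=[1],
                 usecols=["SID", "SEASON", "BASIN", "WMO_WIND"],
                 low_memory=False)
ib["SEASON"] = pd.to_numeric(ib["SEASON"], errors="coerce")
ib["WMO_WIND"] = pd.to_numeric(ib["WMO_WIND"], errors="coerce")

# Tropical cyclones per basin per decade (each storm counted once).
storms = ib.groupby("SID").first()
storms["decade"] = (storms["SEASON"] // 10) * 10
freq = storms.groupby(["BASIN", "decade"]).size()

# Intensity: peak maximum sustained wind per storm, averaged per basin.
peak = ib.groupby("SID")["WMO_WIND"].max()
intensity = peak.groupby(storms["BASIN"]).mean()
print(freq.tail(), intensity, sep="\n")
```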
Abstract:
Being able to detect a single molecule without the use of labels has been a long-standing goal of bioengineers and physicists. This would simplify applications ranging from single-molecule binding studies to those involving public health and security, improved drug screening, medical diagnostics, and genome sequencing. One promising technique with the potential to detect single molecules is the microtoroid optical resonator. The main obstacle to detecting single molecules, however, is decreasing the noise level of the measurements such that a single molecule can be distinguished from background. We have used laser frequency locking in combination with balanced detection and data processing techniques to reduce the noise level of these devices and report the detection of a wide range of nanoscale objects, from nanoparticles with radii from 100 to 2.5 nm, to exosomes, ribosomes, and single protein molecules (mouse immunoglobulin G and human interleukin-2). We further extend the exosome results towards creating a non-invasive tumor biopsy assay. Our results, covering several orders of magnitude of particle radius (100 nm to 2 nm), agree with the 'reactive' model prediction for the frequency shift of the resonator upon particle binding. In addition, we demonstrate that molecular weight may be estimated from the frequency shift through a simple formula, thus providing a basis for an "optical mass spectrometer" in solution. We anticipate that our results will enable many applications, including more sensitive medical diagnostics and fundamental studies of single receptor-ligand and protein-protein interactions in real time. The thesis summarizes what we have achieved thus far and shows that the goal of detecting a single molecule without the use of labels can now be realized.
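For reference, the 'reactive' perturbation result as commonly written in the whispering-gallery-mode biosensing literature (the thesis's exact notation may differ) relates the fractional resonance shift to the particle's excess polarizability:

\[
\frac{\Delta\lambda}{\lambda} = -\frac{\Delta f}{f} \approx \frac{\alpha_{\mathrm{ex}}\,\lvert \mathbf{E}(\mathbf{r}_p)\rvert^{2}}{2\int \varepsilon(\mathbf{r})\,\lvert \mathbf{E}(\mathbf{r})\rvert^{2}\,dV},
\]

where α_ex is the excess polarizability of the bound particle and E(r_p) is the resonator mode field at the binding site. Because α_ex scales with particle volume, and hence with mass at fixed density, the measured shift yields the molecular-weight estimate mentioned above.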
Abstract:
We propose a simple single-layer magnetic microtrap configuration which can trap an array of magnetically trapped Bose-Einstein condensates. The configuration consists of two sets of parallel wires perpendicular to each other, with all of the crossing points cut off to maintain the uniformity of the current. We analyse the trapping potential, the positions of the trap centres, and the uniformity of the array of traps. The trap depth and trap frequency for different parameters are also calculated. Finally, the effects of the cut-off crossing points, the dissipated power, and chip fabrication are discussed briefly.
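The basic scalings behind such wire-based traps are the standard atom-chip relations (background, not taken from the paper): a wire carrying current I, together with a perpendicular bias field B⊥, forms a guide at the height r₀ where the two fields cancel,

\[
B_{\text{wire}}(r)=\frac{\mu_0 I}{2\pi r},\qquad
r_0=\frac{\mu_0 I}{2\pi B_\perp},\qquad
B'(r_0)=\frac{B_\perp}{r_0},
\]

and for atoms with magnetic moment μ = m_F g_F μ_B trapped about a nonzero field minimum B₀, the radial trap frequency is ω_r ≈ B'(r₀)√(μ/(m B₀)). These relations set how the trap depth, position, and frequency analysed in the abstract vary with the wire current and bias field.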
Abstract:
The response of linear, viscously damped systems to excitations having time-varying frequency is the subject of exact and approximate analyses, which are supplemented by an analog computer study of single-degree-of-freedom system response to excitations having frequencies depending linearly and exponentially on time.
The technique of small perturbations and the methods of stationary phase and saddle-point integration, as well as a novel bounding procedure, are utilized to derive approximate expressions characterizing the system response envelope—particularly near resonances—for the general time-varying excitation frequency.
Descriptive measurements of system resonant behavior recorded during the course of the analog study—maximum response, excitation frequency at which maximum response occurs, and the width of the response peak at the half-power level—are investigated to determine dependence upon natural frequency, damping, and the functional form of the excitation frequency.
The laboratory problem of determining the properties of a physical system from records of its response to excitations of this class is considered, and the transient phenomenon known as “ringing” is treated briefly.
It is shown that system resonant behavior, as portrayed by the above measurements and expressions, is relatively insensitive to the specifics of the excitation frequency-time relation and may be described to good order in terms of parameters combining system properties with the time derivative of excitation frequency evaluated at resonance.
One of these parameters is shown useful for predicting whether or not a given excitation having a time-varying frequency will produce strong or subtle changes in the response envelope of a given system relative to the steady-state response envelope. The parameter is shown, additionally, to be useful for predicting whether or not a particular response record will exhibit the “ringing” phenomenon.
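A minimal numerical sketch of the system class studied, for an excitation frequency depending linearly on time (all parameter values are illustrative, not from the thesis):

```python
import numpy as np
from scipy.integrate import solve_ivp
from scipy.signal import hilbert

# Single-degree-of-freedom system driven by a linear chirp:
#   x'' + 2*zeta*wn*x' + wn**2 * x = cos(phi(t)),  dphi/dt = w0 + a*t
wn, zeta = 2 * np.pi * 10.0, 0.02             # natural frequency, damping
w0, a = 2 * np.pi * 5.0, 2 * np.pi * 0.5      # starting frequency, sweep rate

def rhs(t, y):
    x, v = y
    phase = w0 * t + 0.5 * a * t**2           # integral of the instantaneous frequency
    return [v, np.cos(phase) - 2 * zeta * wn * v - wn**2 * x]

t = np.linspace(0.0, 20.0, 20000)
sol = solve_ivp(rhs, (0.0, 20.0), [0.0, 0.0], t_eval=t, max_step=1e-3)

# Response envelope from the analytic signal. Its peak lags the instant
# (t = 10 s here) at which the excitation frequency sweeps through wn,
# and the peak falls below the steady-state resonant response -- the
# passage-through-resonance behaviour characterised above.
env = np.abs(hilbert(sol.y[0]))
print(f"peak envelope at t = {t[env.argmax()]:.2f} s")
```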
Abstract:
The first chapter of this thesis deals with automating data gathering for single-cell microfluidic tests. The programs developed saved significant amounts of time with no loss in accuracy. The technology from this chapter was applied to the experiments in both Chapters 4 and 5.
The second chapter describes the use of statistical learning to predict whether an anti-angiogenic drug (Bevacizumab) would successfully treat a glioblastoma multiforme tumor. This was done by first measuring protein levels in 92 blood samples using the DNA-encoded antibody library platform, which allowed 35 different proteins to be measured per sample with sensitivity comparable to ELISA. Two statistical learning models were developed to predict whether the treatment would succeed. The first, logistic regression, predicted with 85% accuracy and an AUC of 0.901 using a five-protein panel. These five proteins were statistically significant predictors and gave insight into the mechanism behind anti-angiogenic success/failure. The second model, an ensemble of logistic regression, kNN, and random forest, predicted with a slightly higher accuracy of 87%.
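A sketch of such an ensemble with scikit-learn (model settings and variable names are illustrative; the abstract does not specify, e.g., the voting scheme):

```python
from sklearn.ensemble import RandomForestClassifier, VotingClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.neighbors import KNeighborsClassifier
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

# Ensemble of the three model families named above. X would hold the
# serum protein levels (e.g. the five-protein panel); y the observed
# treatment outcome (success/failure).
ensemble = VotingClassifier(
    estimators=[
        ("logreg", make_pipeline(StandardScaler(),
                                 LogisticRegression(max_iter=1000))),
        ("knn", make_pipeline(StandardScaler(), KNeighborsClassifier())),
        ("rf", RandomForestClassifier(n_estimators=200, random_state=0)),
    ],
    voting="soft",   # average the predicted class probabilities
)
# ensemble.fit(X_train, y_train)
# accuracy = ensemble.score(X_test, y_test)
```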
The third chapter details the development of a photocleavable conjugate that enabled multiplexed cell-surface detection in microfluidic devices. The method successfully detected streptavidin on coated beads with a 92% positive predictive rate. Furthermore, chambers with 0, 1, 2, and 3+ beads were statistically distinguishable. The method was then used to detect CD3 on Jurkat T cells, yielding a positive predictive rate of 49% and a false positive rate of 0%.
The fourth chapter describes measuring T cell polyfunctionality to predict whether a patient will respond to an adoptive T cell transfer therapy. In 15 patients, we measured 10 proteins from individual T cells (~300 cells per patient). The polyfunctional strength index was calculated and correlated with the patient's progression-free survival (PFS) time; 52 other parameters measured in the single-cell test were also correlated with the PFS. No statistically significant correlate has been identified, however, and more data are necessary to reach a conclusion.
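A sketch of the polyfunctional strength index as it is commonly defined in the single-cell secretion literature (the thesis may differ in detail; thresholds and names here are illustrative):

```python
import numpy as np

def psi(secretion, threshold):
    """Polyfunctional strength index:
    (fraction of cells secreting 2+ proteins) x (mean signal intensity
    of the proteins secreted by those polyfunctional cells).
    `secretion` is a cells x proteins intensity matrix; `threshold`
    marks a protein as secreted."""
    secreted = secretion > threshold
    poly = secreted.sum(axis=1) >= 2          # polyfunctional cells
    if not poly.any():
        return 0.0
    mean_intensity = secretion[poly][secreted[poly]].mean()
    return poly.mean() * mean_intensity
```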
Finally, the fifth chapter examines interactions between T cells and how contact affects their protein secretion. It was observed that T cells in direct contact selectively enhance their protein secretion, in some cases by over 5-fold. This occurred for Granzyme B, Perforin, CCL4, TNF-α, and IFN-γ; IL-10 was shown to decrease slightly upon contact. This phenomenon held true for T cells from all patients tested (n=8). Using single-cell data, the theoretical protein secretion frequency was calculated for two cells and then compared with the observed secretion rates both for pairs of cells not in contact and for pairs in contact. In over 90% of cases, the theoretical protein secretion rate matched that of two cells not in contact.
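The two-cell baseline referred to above follows from an independence assumption (the exact model used in the thesis is assumed here): if a single cell secretes a given protein with frequency p, a pair of non-interacting cells should register secretion with frequency

\[
p_{\text{pair}} = 1 - (1 - p)^{2} = 2p - p^{2}.
\]

Non-contact pairs matching this baseline while contact pairs exceed it is what identifies the enhancement as a genuine contact effect.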
Abstract:
Optical Coherence Tomography (OCT) is a popular, rapidly growing imaging technique with an increasing number of biomedical applications due to its noninvasive nature. However, there are three major challenges in understanding and improving an OCT system: (1) Obtaining an OCT image is not easy. It either takes a real medical experiment or requires days of computer simulation. Without much data, it is difficult to study the physical processes underlying OCT imaging of different objects, simply because there aren't many imaged objects. (2) Interpretation of an OCT image is also hard. This challenge is more profound than it appears. For instance, it would require a trained expert to tell from an OCT image of human skin whether there is a lesion or not. This is expensive in its own right, but even the expert cannot be sure about the exact size of the lesion or the width of the various skin layers. The take-away message is that analyzing an OCT image even at a high level usually requires a trained expert, and pixel-level interpretation is simply unrealistic. The reason is simple: we have OCT images but not their underlying ground-truth structure, so there is nothing to learn from. (3) The imaging depth of OCT is very limited (millimeter or sub-millimeter in human tissues). While OCT uses infrared light for illumination to stay noninvasive, the downside is that photons at such long wavelengths can only penetrate a limited depth into the tissue before being back-scattered. To image a particular region of a tissue, photons first need to reach that region. As a result, OCT signals from deeper regions of the tissue are both weak (since few photons reach there) and distorted (due to multiple scattering of the contributing photons). This fact alone makes OCT images very hard to interpret.
This thesis addresses the above challenges by developing an advanced Monte Carlo simulation platform that is 10000 times faster than the state-of-the-art simulator in the literature, bringing the simulation time down from 360 hours to a single minute. This powerful simulation tool not only enables us to efficiently generate as many OCT images of objects with arbitrary structure and shape as we want on a common desktop computer, but also provides the underlying ground truth of the simulated images, because we specify it at the start of each simulation. This is one of the key contributions of this thesis. What allows us to build such a powerful simulation tool includes a thorough understanding of the signal formation process, a careful implementation of the importance sampling/photon splitting procedure, efficient use of a voxel-based mesh system in determining photon-mesh intersections, and parallel computation of the different A-scans that constitute a full OCT image, among other programming and mathematical techniques, which are explained in detail later in the thesis.
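The core of the importance-sampling speed-up can be sketched generically as biased sampling plus a likelihood-ratio weight (an illustration of the principle; the thesis's actual biasing scheme for OCT is more involved):

```python
import numpy as np

rng = np.random.default_rng(0)

def hg_pdf(mu, g):
    """Henyey-Greenstein phase function over the deflection cosine mu."""
    return 0.5 * (1 - g**2) / (1 + g**2 - 2 * g * mu) ** 1.5

def hg_sample(g):
    """Draw a deflection cosine from the Henyey-Greenstein distribution."""
    u = rng.random()
    if abs(g) < 1e-6:
        return 2 * u - 1
    s = (1 - g**2) / (1 - g + 2 * g * u)
    return (1 + g**2 - s**2) / (2 * g)

def biased_scatter(g_true, g_bias):
    """One importance-sampled scattering event: sample the deflection
    from a biased phase function (e.g. one favouring directions that
    return toward the detector), then multiply the photon weight by the
    likelihood ratio so the overall estimate remains unbiased."""
    mu = hg_sample(g_bias)                        # sample from biased pdf
    weight = hg_pdf(mu, g_true) / hg_pdf(mu, g_bias)
    return mu, weight
```

Only photons likely to contribute to the detected OCT signal are then followed (and split into multiple weighted copies where useful), which is what turns days of unbiased simulation into minutes.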
Next we aim at the inverse problem: given an OCT image, predict/reconstruct its ground-truth structure at the pixel level. By solving this problem we would be able to interpret an OCT image completely and precisely without help from a trained expert. It turns out that we can do much better. For simple structures we are able to reconstruct the ground truth of an OCT image more than 98% correctly, and for more complicated structures (e.g., a multi-layered brain structure) we reach 93%. We achieved this through extensive use of machine learning. The success of the Monte Carlo simulation already puts us in a great position by providing a great deal of data (effectively unlimited) in the form of (image, truth) pairs. Through a transformation of the high-dimensional response variable, we convert the learning task into a multi-output multi-class classification problem and a multi-output regression problem. We then build a hierarchical architecture of machine learning models (a committee of experts) and train different parts of the architecture with specifically designed data sets. In prediction, an unseen OCT image first goes through a classification model that determines its structure (e.g., the number and types of layers present in the image); the image is then handed to a regression model trained specifically for that structure, which predicts the thicknesses of the different layers and thereby reconstructs the ground truth of the image. We also demonstrate that ideas from Deep Learning can be useful to further improve the performance.
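A runnable miniature of this classify-then-regress committee of experts (synthetic stand-in data and model choices are illustrative; the thesis's architecture is more elaborate):

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier, RandomForestRegressor

rng = np.random.default_rng(0)
# Synthetic stand-ins: 200 'images' as feature vectors, a structure
# label (here, the layer count) and per-image layer thicknesses.
X = rng.random((200, 32))
y_structure = rng.integers(2, 5, size=200)      # 2-4 layers per image
y_thickness = rng.random((200, 4))              # padded thickness targets

# Stage 1: classify the structure of an image.
clf = RandomForestClassifier(n_estimators=100, random_state=0)
clf.fit(X, y_structure)

# Stage 2: one regression 'expert' per structure, trained only on
# images of that structure (the committee of experts).
experts = {
    s: RandomForestRegressor(random_state=0).fit(
        X[y_structure == s], y_thickness[y_structure == s])
    for s in np.unique(y_structure)
}

def reconstruct(x):
    s = clf.predict(x[None])[0]                 # route to the matching expert
    return s, experts[s].predict(x[None])[0]    # predicted layer thicknesses

print(reconstruct(X[0]))
```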
It is worth pointing out that solving the inverse problem automatically improves the effective imaging depth, since the lower half of an OCT image (i.e., greater depth), which previously could hardly be seen, now becomes fully resolved. Interestingly, although the OCT signals constituting the lower half of the image are weak, messy, and uninterpretable to the human eye, they still carry enough information that a well-trained machine learning model can recover the true structure of the object being imaged from them. This is another case where artificial intelligence (AI) outperforms humans. To the best of the author's knowledge, this thesis is the first attempt, and a successful one, to reconstruct an OCT image at the pixel level. Even attempting such a task requires a large number of fully annotated OCT images (hundreds or even thousands), which is clearly impossible without a powerful simulation tool like the one developed in this thesis.