998 results for Contamination (Technology)


Relevance:

20.00%

Publisher:

Abstract:

Part I.

In recent years, backscattering spectrometry has become an important tool for the analysis of thin films. An inherent limitation, though, is the loss of depth resolution due to energy straggling of the beam. To investigate this, energy straggling of 4He has been measured in thin films of Ni, Al, Au and Pt. Straggling is roughly proportional to the square root of thickness, appears to have a slight energy dependence, and generally decreases with decreasing atomic number of the absorber. The results are compared with predictions of theory and with previous measurements. While the Ni measurements are in fair agreement with Bohr's theory, the Al measurements are 30% above and the Au measurements 40% below the predicted values. The Au and Pt measurements give straggling values close to one another.
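
The square-root-of-thickness scaling follows directly from Bohr's theory, in which the straggling variance grows linearly with the number of target atoms traversed. The sketch below evaluates Bohr's formula for 4He in Ni; the atomic density and film thickness are illustrative round numbers, not the thesis's measured values.

```python
import math

E2 = 1.44e-7  # e^2 in eV*cm (Gaussian units)

def bohr_straggling_ev(z1, z2, n_atoms_cm3, thickness_cm):
    """Bohr straggling (standard deviation, eV):
    Omega^2 = 4*pi * Z1^2 * e^4 * N * Z2 * t."""
    variance = 4 * math.pi * z1**2 * E2**2 * n_atoms_cm3 * z2 * thickness_cm
    return math.sqrt(variance)

# 4He (Z1 = 2) in Ni (Z2 = 28, N ~ 9.1e22 atoms/cm^3), 100 nm film.
omega_1 = bohr_straggling_ev(2, 28, 9.1e22, 100e-7)
omega_2 = bohr_straggling_ev(2, 28, 9.1e22, 200e-7)

# Doubling the thickness scales the straggling by sqrt(2), reproducing
# the "proportional to square root of thickness" behavior quoted above.
print(omega_1, omega_2 / omega_1)
```

Because the variance is linear in thickness, any fixed percentage deviation from Bohr's prediction (such as the +30% for Al or -40% for Au) shows up as a constant multiplicative offset at all thicknesses.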

Part II.

MeV backscattering spectrometry and X-ray diffraction are used to investigate the behavior of sputter-deposited Ti-W mixed films on Si substrates. During vacuum anneals at temperatures near 700°C for several hours, the metallization layer reacts with the substrate. Backscattering analysis shows that the resulting compound layer is uniform in composition and contains Ti, W and Si. The Ti:W ratio in the compound corresponds to that of the deposited metal film. X-ray analyses with Read and Guinier cameras reveal the presence of the ternary Ti_xW_(1-x)Si_2 compound. Its composition is unaffected by oxygen contamination during annealing, but the reaction rate is affected. The rate measured on samples with about 15% oxygen contamination after annealing is linear, of the order of 0.5 Å per second at 725°C, and depends on the crystallographic orientation of the substrate and the dc bias during sputter deposition of the Ti-W film.

Au layers of about 1000 Å thickness were deposited onto unreacted Ti-W films on Si. When annealed at 400°C, these samples underwent a color change, and SEM micrographs showed that an intricate pattern of fissures, typically 3 µm wide, had evolved. Analysis by electron microprobe revealed that Au had segregated preferentially into the fissures. This result suggests that Ti-W is not a barrier to Au-Si intermixing at 400°C.

Relevance:

20.00%

Publisher:

Abstract:

This work comprises four studies of the properties of the luminescence from Ge.

The temperature, pump-power and time dependences of the photoluminescence spectra of Li-, As-, Ga-, and Sb-doped Ge crystals were studied. For impurity concentrations less than about 10^15 cm^-3, emission due to electron-hole droplets can clearly be identified. For impurity concentrations on the order of 10^16 cm^-3, the broad lines in the spectra, which had previously been attributed to emission from the electron-hole droplet, were found to possess pump-power- and time-dependent line shapes. These properties show that these broad lines cannot be due to emission from electron-hole droplets alone. We interpret these lines as a combination of emission from (1) electron-hole droplets, (2) broadened multiexciton complexes, (3) broadened bound excitons, and (4) a plasma of electrons and holes. The properties of the electron-hole droplet in As-doped Ge were shown to agree with theoretical predictions.

The time dependences of the luminescence intensities of the electron-hole droplet in pure and doped Ge were investigated at 2 and 4.2 K. The decay of the electron-hole droplet in pure Ge at 4.2 K was found to be pump-power dependent and too slow to be explained by the widely accepted model due to Pokrovskii and Hensel et al. Detailed studies of the decay of electron-hole droplets in doped Ge were carried out for the first time, and we find no evidence of evaporation of excitons from electron-hole droplets at 4.2 K. This doped-Ge result is unexplained by the model of Pokrovskii and Hensel et al. It is shown that a model based on a cloud of electron-hole droplets generated in the crystal, incorporating (1) exciton flow among electron-hole droplets in the cloud and (2) exciton diffusion away from the cloud, is capable of explaining the observed results.

It is shown that impurities introduced during device fabrication can lead to the previously reported differences between the spectra of laser-excited high-purity Ge and electrically excited Ge double injection devices. By properly choosing the device geometry so as to minimize this Li contamination, it is shown that the Li concentration in double injection devices may be reduced to less than about 10^15 cm^-3, and electrically excited luminescence spectra similar to the photoluminescence spectra of pure Ge may be produced. This proves conclusively that electron-hole droplets may be created in double injection devices by electrical excitation.

The ratio of the LA- to TO-phonon-assisted luminescence intensities of the electron-hole droplet is demonstrated to be equal to the high-temperature limit of the same ratio for the exciton in Ge. This result gives one confidence to determine similar ratios for the electron-hole droplet from the corresponding exciton ratio in semiconductors in which the ratio for the electron-hole droplet cannot be determined directly (e.g., Si and GaP). Knowing the value of this ratio for the electron-hole droplet, one can obtain accurate values of many parameters of the electron-hole droplet in these semiconductors spectroscopically.

Relevance:

20.00%

Publisher:

Abstract:

The epoch of reionization remains one of the last uncharted eras of cosmic history, yet this time is of crucial importance, encompassing the formation of both the first galaxies and the first metals in the universe. In this thesis, I present four related projects that both characterize the abundance and properties of these first galaxies and use follow-up observations of these galaxies to achieve one of the first measurements of the neutral fraction of the intergalactic medium during the heart of the reionization era.

First, we present the results of a spectroscopic survey using the Keck telescopes targeting 6.3 < z < 8.8 star-forming galaxies. We secured observations of 19 candidates, initially selected by applying the Lyman break technique to infrared imaging data from the Wide Field Camera 3 (WFC3) onboard the Hubble Space Telescope (HST). This survey builds upon earlier work from Stark et al. (2010, 2011), which showed that star-forming galaxies at 3 < z < 6, when the universe was highly ionized, displayed a significant increase in strong Lyman alpha emission with redshift. Our work uses the LRIS and NIRSPEC instruments to search for Lyman alpha emission in candidates at greater redshift in the observed near-infrared, in order to discern whether this evolution continues, or is quenched by an increase in the neutral fraction of the intergalactic medium. Our spectroscopic observations typically reach a 5-sigma limiting sensitivity of < 50 Å. Despite expecting to detect Lyman alpha at 5-sigma in 7-8 galaxies based on our Monte Carlo simulations, we achieve secure detections in only two of 19 sources. Combining these results with a similar sample of 7 galaxies from Fontana et al. (2010), we determine that these few detections would occur in < 1% of simulations if the intrinsic distribution were the same as that at z ~ 6. We consider other explanations for this decline, but find the most convincing explanation to be an increase in the neutral fraction of the intergalactic medium. Using theoretical models, we infer a neutral fraction of X_HI ~ 0.44 at z = 7.
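
The statistical argument here (that so few detections would be rare if the z ~ 6 Lyman alpha fraction held) can be illustrated with a plain binomial calculation. The per-object detection probability below is a single round number chosen so the expected count is ~7.5; the thesis's actual Monte Carlo folds in per-object depths and equivalent-width distributions, so this is only an order-of-magnitude sketch.

```python
from math import comb

def prob_at_most(k_max, n, p):
    """P(X <= k_max) for X ~ Binomial(n, p)."""
    return sum(comb(n, k) * p**k * (1 - p)**(n - k) for k in range(k_max + 1))

# Expecting ~7.5 detections among 19 targets implies p ~ 0.39 per object.
p_detect = 7.5 / 19
print(prob_at_most(2, 19, p_detect))  # well under 1%: <= 2 detections is rare
```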

Second, we characterize the abundance of star-forming galaxies at z > 6.5, again using WFC3 onboard the HST. This project conducted a detailed search for candidates in both the Hubble Ultra Deep Field and a number of additional, wider Hubble Space Telescope surveys to construct luminosity functions at both z ~ 7 and 8, reaching 0.65 and 0.25 mag fainter, respectively, than any previous survey. With this increased depth, we achieve some of the most robust constraints on the Schechter function faint-end slopes at these redshifts, finding very steep values of alpha_{z~7} = -1.87 +/- 0.18 and alpha_{z~8} = -1.94 +/- 0.23. We discuss these results in the context of cosmic reionization, and show that, given reasonable assumptions about the ionizing spectra and escape fraction of ionizing photons, only half the photons needed to maintain reionization are provided by currently observable galaxies at z ~ 7-8. We show that an extension of the luminosity function down to M_{UV} = -13.0, coupled with a low level of star formation out to higher redshift, can fit all available constraints on the ionization history of the universe.
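
The leverage of the faint-end extension can be sketched by integrating a Schechter luminosity function with the measured slope. The characteristic magnitude M* = -20.1, the bright limit of -23, and the observable limit of -17 below are assumed, typical z ~ 7 values for illustration only, not the thesis's fitted parameters; the normalization phi* cancels in the ratio.

```python
import math

def schechter_lum_density(alpha, m_star, m_bright, m_faint, n=20000):
    """Midpoint-rule integral of L * phi(M) dM between two absolute magnitudes.
    phi(M) = 0.4 ln(10) phi* x^(alpha+1) exp(-x) with x = 10^(0.4 (M* - M));
    weighting by luminosity adds one more power of x. phi* is omitted, since
    it cancels when comparing two integrals of the same function."""
    dm = (m_faint - m_bright) / n
    total = 0.0
    for i in range(n):
        m = m_bright + (i + 0.5) * dm
        x = 10 ** (0.4 * (m_star - m))
        total += 0.4 * math.log(10) * x ** (alpha + 2) * math.exp(-x) * dm
    return total

ALPHA, M_STAR = -1.87, -20.1  # alpha from the z~7 fit above; M* assumed
observed = schechter_lum_density(ALPHA, M_STAR, -23.0, -17.0)
extended = schechter_lum_density(ALPHA, M_STAR, -23.0, -13.0)
print(extended / observed)  # galaxies below the detection limit roughly
                            # double the UV luminosity (ionizing) budget
```

With a slope this steep, roughly half of the total ultraviolet luminosity density comes from galaxies fainter than the deepest current surveys, which is exactly why the M_UV = -13 extension can close the ionizing-photon deficit.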

Third, we investigate the strength of nebular emission in 3 < z < 5 star-forming galaxies. We begin by using the Infrared Array Camera (IRAC) onboard the Spitzer Space Telescope to measure the strength of H alpha emission in a sample of 3.8 < z < 5.0 spectroscopically confirmed galaxies. We then conduct near-infrared observations of star-forming galaxies at 3 < z < 3.8 to measure the strength of the [OIII] 4959/5007 and H beta emission lines from the ground using MOSFIRE. In both cases, we uncover near-ubiquitous strong nebular emission, and find excellent agreement between the fluxes derived using the two methods. For a subset of 9 objects in our MOSFIRE sample that have secure Spitzer IRAC detections, we compare the emission line flux derived from the excess in the K_s band photometry to that derived from direct spectroscopy and find 7 to agree within a factor of 1.6, with only one catastrophic outlier. Finally, for a different subset for which we also have DEIMOS rest-UV spectroscopy, we compare the relative velocities of Lyman alpha and the rest-optical nebular lines, which should trace the sites of star formation. We find a median velocity offset of only v_{Ly alpha} = 149 km/s, significantly less than the 400 km/s observed for star-forming galaxies with weaker Lyman alpha emission at z = 2-3 (Steidel et al. 2010), and show that this decrease can be explained by a decrease in the neutral hydrogen column density covering the galaxy. We discuss how this implies a lower neutral fraction for a given observed attenuation of Lyman alpha when its visibility is used to probe the ionization state of the intergalactic medium.

Finally, we utilize the recent CANDELS wide-field infrared photometry over the GOODS-N and GOODS-S fields to re-analyze the use of Lyman alpha emission to evaluate the neutrality of the intergalactic medium. With these new data, we derive accurate ultraviolet spectral slopes for a sample of 468 star-forming galaxies at 3 < z < 6, already observed in the rest-UV with the Keck spectroscopic survey (Stark et al. 2010). We use a Bayesian fitting method that accurately accounts for contamination and obscuration by skylines to derive a relationship between the UV slope of a galaxy and its intrinsic Lyman alpha equivalent-width probability distribution. We then apply these results to spectroscopic surveys during the reionization era, including our own, to accurately interpret the drop in observed Lyman alpha emission. From our most recent such MOSFIRE survey, we also present evidence for the most distant galaxy confirmed through emission line spectroscopy, at z = 7.62, as well as a first detection of the CIII] 1907/1909 doublet at z > 7.

We conclude the thesis by exploring future prospects and summarizing the results of Robertson et al. (2013). This work synthesizes many of the measurements in this thesis, along with external constraints, to create a model of reionization that fits nearly all available constraints.

Relevance:

20.00%

Publisher:

Abstract:

Mineral lubricating oil is widely used worldwide in the operation of machines and engines. The life cycle of this petroleum derivative, however, generates a residue (used lubricating oil) that is harmful to the environment when not properly disposed of or recycled. In Brazil, despite regulations that specifically govern the storage, collection and destination of used lubricating oil, much of it is still discharged directly into the environment without any treatment, so studies aimed at understanding the processes involved and at developing remediation technologies for areas contaminated by this residue are of great importance. The general objective of this work was to conduct treatability studies of sandy soil experimentally contaminated with 5% (m m-1, dry basis) used lubricating oil through two different bioremediation strategies: biostimulation and bioaugmentation. Two experiments were conducted. In the first, aerobic microbial activity during biodegradation of the used oil was evaluated with the Bartha respirometric method. In the second, three solid-phase bioreactors simulating static biopiles with forced aeration were assembled, each containing 125 kg of soil and 5% (m m-1, dry basis) used automotive lubricating oil, which received the following treatments: biostimulation by pH and moisture adjustment (BIOSca); biostimulation by pH and moisture adjustment combined with bioaugmentation by addition of mature compost (BIOA1ca); and biostimulation by pH and moisture adjustment combined with bioaugmentation by addition of young compost (BIOA2ca). Three bench-scale bioreactors simulating static biopiles without forced aeration were also assembled, each containing 3 kg of soil and 5% (m m-1) of the same contaminant: the first contained uncontaminated soil (CONTsa); the second, contaminated soil with pH adjustment (BIOSsa); and the third, contaminated soil with 0.3% sodium azide added (ABIOsa).
The treatments were evaluated by the removal of total petroleum hydrocarbons (TPHs); after 120 days, TPH removals of 84.75%, 99.99% and 99.99% were obtained with BIOS, BIOA1 and BIOA2, respectively, showing that biostimulation combined with bioaugmentation is promising for remediating soil contaminated with used lubricating oil. The treatments that received compost (BIOA1 and BIOA2) did not differ in TPH removal, indicating that the maturation stage of the compost did not influence the efficiency of the process. Nevertheless, the treatments that received compost were more efficient than the treatment without compost addition.

Relevance:

20.00%

Publisher:

Abstract:

The long- and short-period body waves of a number of moderate earthquakes occurring in central and southern California, recorded at regional (200-1400 km) and teleseismic (> 30°) distances, are modeled to obtain the source parameters: focal mechanism, depth, seismic moment, and source time history. The modeling is done in the time domain using a forward modeling technique based on ray summation. A simple layer-over-a-half-space velocity model is used, with additional layers added if necessary, for example in a basin with a low-velocity lid.

The earthquakes studied fall into two geographic regions: 1) the western Transverse Ranges, and 2) the western Imperial Valley. Earthquakes in the western Transverse Ranges include the 1987 Whittier Narrows earthquake, several offshore earthquakes that occurred between 1969 and 1981, and aftershocks of the 1983 Coalinga earthquake (these actually occurred north of the Transverse Ranges but share many characteristics with those that occurred there). These earthquakes are predominantly thrust faulting events with an average east-west strike, but with many variations. Of the six earthquakes with sufficient short-period data to accurately determine the source time history, five were complex events; that is, they could not be modeled as a simple point source but consisted of two or more subevents. The subevents of the Whittier Narrows earthquake had different focal mechanisms. In the other cases, the subevents appear to be the same, but small variations could not be ruled out.

The recent Imperial Valley earthquakes modeled include the two 1987 Superstition Hills earthquakes and the 1969 Coyote Mountain earthquake. All are strike-slip events, and the second 1987 earthquake is a complex event with non-identical subevents.

In all the earthquakes studied, and particularly the thrust events, constraining the source parameters required modeling several phases and distance ranges. Teleseismic P waves could provide only approximate solutions. P_(nl) waves were probably the most useful phase in determining the focal mechanism, with additional constraints supplied by the SH waves when available. Contamination of the SH waves by shear-coupled PL waves was a frequent problem. Short-period data were needed to obtain the source time function.

In addition to the earthquakes mentioned above, several historic earthquakes were also studied. Earthquakes that occurred before the existence of dense local and worldwide networks are difficult to model due to the sparse data set. It has been noticed that earthquakes that occur near each other often produce similar waveforms implying similar source parameters. By comparing recent well studied earthquakes to historic earthquakes in the same region, better constraints can be placed on the source parameters of the historic events.

The Lompoc earthquake (M=7) of 1927 is the largest offshore earthquake to occur in California this century. By direct comparison of waveforms and amplitudes with the Coalinga and Santa Lucia Banks earthquakes, the focal mechanism (thrust faulting on a northwest striking fault) and long-period seismic moment (10^(26) dyne cm) can be obtained. The S-P travel times are consistent with an offshore location, rather than one in the Hosgri fault zone.

Historic earthquakes in the western Imperial Valley were also studied. These events include the 1942 and 1954 earthquakes. The earthquakes were relocated by comparing S-P and R-S times to recent earthquakes. It was found that only minor changes in the epicenters were required but that the Coyote Mountain earthquake may have been more severely mislocated. The waveforms as expected indicated that all the events were strike-slip. Moment estimates were obtained by comparing the amplitudes of recent and historic events at stations which recorded both. The 1942 event was smaller than the 1968 Borrego Mountain earthquake although some previous studies suggested the reverse. The 1954 and 1937 earthquakes had moments close to the expected value. An aftershock of the 1942 earthquake appears to be larger than previously thought.
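
The moment estimation described in the last paragraph amounts to simple scaling: for events with similar waveforms recorded at a common station, long-period amplitude is proportional to seismic moment. The sketch below shows the arithmetic; the moment and the trace amplitudes are invented numbers for illustration, not values from this work.

```python
def moment_from_amplitude(m0_reference, amp_historic, amp_reference):
    """Scale a well-determined seismic moment by the ratio of long-period
    trace amplitudes of two similar events recorded at the same station."""
    return m0_reference * (amp_historic / amp_reference)

# Hypothetical: a modern reference event with M0 = 1.0e25 dyne*cm produced
# a 12 mm trace amplitude; a historic event produced 9 mm at the same
# station, implying M0 ~ 7.5e24 dyne*cm for the historic event.
print(moment_from_amplitude(1.0e25, 9.0, 12.0))
```

The comparison is only meaningful when the two waveforms are genuinely similar, i.e. when mechanism, depth, and path are close enough that the amplitude ratio isolates the moment ratio.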

Relevance:

20.00%

Publisher:

Abstract:

While synoptic surveys in the optical and at high energies have revealed a rich discovery phase space of slow transients, a similar yield is still awaited in the radio. The majority of past blind surveys, carried out with radio interferometers, have suffered from a low yield of slow transients, ambiguous transient classifications, and contamination by false positives. The newly refurbished Karl G. Jansky Very Large Array (Jansky VLA) offers wider bandwidths for accurate RFI excision as well as substantially improved sensitivity and survey speed compared with the old VLA. The Jansky VLA thus eliminates the pitfalls of interferometric transient searches by facilitating sensitive, wide-field, and near-real-time radio surveys and enabling a systematic exploration of the dynamic radio sky. This thesis carries out blind Jansky VLA surveys to characterize radio variable and transient sources at frequencies of a few GHz and on timescales between days and years. Through joint radio and optical surveys, the thesis addresses outstanding questions pertaining to the rates of slow radio transients (e.g. radio supernovae, tidal disruption events, binary neutron star mergers, stellar flares, etc.), the false-positive foreground relevant to radio and optical counterpart searches for gravitational wave sources, and the beaming factor of gamma-ray bursts. The need for rapid processing of Jansky VLA data and near-real-time transient searches motivated the development of state-of-the-art software infrastructure. This thesis demonstrates the Jansky VLA as a powerful transient search instrument, and it serves as a pathfinder for the transient surveys planned for the SKA-mid pathfinder facilities, viz. ASKAP, MeerKAT, and WSRT/Apertif.

Relevance:

20.00%

Publisher:

Abstract:

The disposal of sewage is the most important item in public sanitation. It is the most important present-day problem in every city, whether large or small. The direct cause of the majority of epidemics is the contamination of the water supply of the city by the excreta of man or animal. Public health varies directly with public sanitation, and if public sanitation is good, the liability of sickness caused by contamination of the water supply is greatly lessened. When a city outgrows its sewerage system, the public health becomes endangered. There are two causes for the increased amount of sewage: increase in population and increase in industrial and manufacturing wastes. The main problem in this connection is the ultimate disposal of the matter which reaches the sewers.

Relevance:

20.00%

Publisher:

Abstract:

The ethylene-bis-dithiocarbamate pesticides of the dithiocarbamate class are among the fungicides most widely used worldwide for pest control. Many methods for determining dithiocarbamates are based on acid hydrolysis in the presence of stannous chloride, followed by analysis of the CS2 generated using different techniques. In this context, the objectives of this work were, as a first step, to study suitable conditions for storing soil samples and, as a second step, to evaluate the degradation and leaching rates of the fungicide mancozeb in a dystrophic cambisol by a spectrophotometric method. The study site was a delimited 36 m2 area of a kale crop located in São Lourenço, in the 3rd district of the municipality of Nova Friburgo-RJ. The analyses were performed at the environmental technology laboratory (LABTAM/UERJ). In the first step, two soil sub-samples contaminated with mancozeb were treated with L-cysteine hydrochloride and stored at room temperature and at -20 °C, then analyzed 1, 7, 15 and 35 days after application of the fungicide. Two other sub-samples not treated with L-cysteine hydrochloride were kept under the same temperature conditions and analyzed at the same time intervals. In the second step, the fungicide MANZATE 800 (DuPont Brasil, 80% mancozeb) was applied at the recommended dose of 3.0 kg ha-1, and soil samples were collected at depths of 0-10, 10-20 and 20-40 cm at 2, 5, 8, 12, 15, 18 and 35 days after application. The samples from each depth were treated with L-cysteine hydrochloride and stored at -20 °C. The results of the first step showed that the cysteine treatment effectively preserved the analyte, both in the sample kept at -20 °C and in the sample kept at room temperature.
The data from the second step showed that the persistence of mancozeb in the soil was similar to that described in the literature. The leaching results showed that, under the conditions of the experiment, mancozeb residues were detected at depths of up to 40 cm; however, leaching-potential models indicated that the fungicide poses no risk of groundwater contamination.

Relevance:

20.00%

Publisher:

Abstract:

Polymer deposition is a serious problem in the etching of fused silica by inductively coupled plasma (ICP) technology, and it usually prevents further etching. We report an optimized etching condition under which no polymer deposition occurs when etching fused silica with ICP technology. Under the optimized etching condition, surfaces of the fabricated fused silica gratings are smooth and clean. The etch rate of fused silica is relatively high, and the etched depth increases linearly with working time. The diffraction performance of gratings fabricated under the optimized etching condition matches theoretical results well. (c) 2005 Optical Society of America.

Relevance:

20.00%

Publisher:

Abstract:

I. It was not possible to produce anti-tetracycline antibody in laboratory animals by any of the methods tried. Tetracycline-protein conjugates were prepared and characterized. It was shown that previous reports of the detection of anti-tetracycline antibody by in vitro methods were in error. Tetracycline precipitates non-specifically with serum proteins. The anaphylactic reaction reported was the result of misinterpretation, since the observations were inconsistent with the known mechanism of anaphylaxis and the supposed antibody would not sensitize guinea pig skin. The hemagglutination reaction was not reproducible and was extremely sensitive to minute amounts of microbial contamination. Both free tetracyclines and the conjugates were found to be poor antigens.

II. Anti-aspiryl antibodies were produced in rabbits using three protein carriers. The method of inhibition of precipitation was used to determine the specificity of the antibody produced. ε-Aminocaproate was found to be the most effective inhibitor of the haptens tested, indicating that the combining hapten of the protein is ε-aspiryl-lysyl. Free aspirin and salicylates were poor inhibitors and did not combine with the antibody to a significant extent. The ortho group was found to participate in the binding to antibody. The average binding constants were measured.

Normal rabbit serum was acetylated by aspirin under in vitro conditions, which are similar to physiological conditions. The extent of acetylation was determined by immunochemical tests. The acetylated serum proteins were shown to be potent antigens in rabbits. It was also shown that aspiryl proteins were partially acetylated. The relation of these results to human aspirin intolerance is discussed.

III. Aspirin did not induce contact sensitivity in guinea pigs when they were immunized by techniques that induce sensitivity with other reactive compounds. The acetylation mechanism is not relevant to this type of hypersensitivity, since sensitivity is not produced by potent acetylating agents like acetyl chloride and acetic anhydride. Aspiryl chloride, a totally artificial system, is a good sensitizer. Its specificity was examined.

IV. Protein conjugates were prepared with p-aminosalicylic acid and various carriers using azo, carbodiimide and mixed anhydride coupling. These antigens were injected into rabbits and guinea pigs, and no anti-hapten IgG or IgM response was obtained. Delayed hypersensitivity was produced in guinea pigs by immunization with the conjugates, and its specificity was determined. Guinea pigs were not sensitized by either injection or topical application of p-aminosalicylic acid or p-aminosalicylate.

Relevance:

20.00%

Publisher:

Abstract:

Inductively coupled plasma (ICP) technology is an advanced dry-etching technology compared with the widely used method of reactive ion etching (RIE). Plasma processing in ICP technology is complicated by the coupled interactions among discharge physics, gas-phase chemistry, and surface chemistry. Extensive experiments have been done and microoptical elements have been fabricated successfully, which proved that ICP technology is very effective for dry etching of microoptical elements. In this paper, we present the detailed fabrication of microoptical fused silica phase gratings with ICP technology. An optimized condition has been found that controls the ICP etching process and greatly improves the etching quality of microoptical elements. With the optimized condition, we have fabricated many good gratings with different periods, depths, and duty cycles. The fabricated gratings are very useful in fields such as spectrometry and high-efficiency filtering in wavelength-division-multiplexing systems.

Relevance:

20.00%

Publisher:

Abstract:

The first chapter of this thesis deals with automating data gathering for single cell microfluidic tests. The programs developed saved significant amounts of time with no loss in accuracy. The technology from this chapter was applied to experiments in both Chapters 4 and 5.

The second chapter describes the use of statistical learning to predict whether an anti-angiogenic drug (Bevacizumab) would successfully treat a glioblastoma multiforme tumor. This was conducted by first measuring protein levels in 92 blood samples using the DNA-encoded antibody library platform. This allowed the measurement of 35 different proteins per sample, with sensitivity comparable to ELISA. Two statistical learning models were developed to predict whether the treatment would succeed. The first, logistic regression, predicted with 85% accuracy and an AUC of 0.901 using a five-protein panel. These five proteins were statistically significant predictors and gave insight into the mechanism behind anti-angiogenic success or failure. The second model, an ensemble of logistic regression, kNN, and random forest, predicted with a slightly higher accuracy of 87%.
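
The two metrics quoted above, accuracy and AUC, can be computed from classifier scores without any learning framework. The sketch below implements both from first principles on an invented toy score list, not the 92-sample cohort; AUC is computed in its Mann-Whitney (rank) form.

```python
def accuracy(scores, labels, threshold=0.5):
    """Fraction of samples whose thresholded score matches the label."""
    preds = [1 if s >= threshold else 0 for s in scores]
    return sum(p == y for p, y in zip(preds, labels)) / len(labels)

def auc(scores, labels):
    """Probability that a random positive outranks a random negative
    (ties count 1/2): the Mann-Whitney form of the ROC AUC."""
    pos = [s for s, y in zip(scores, labels) if y == 1]
    neg = [s for s, y in zip(scores, labels) if y == 0]
    wins = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

# Toy example: perfectly separated scores give accuracy 1.0 and AUC 1.0.
scores = [0.9, 0.8, 0.7, 0.3, 0.2, 0.1]
labels = [1, 1, 1, 0, 0, 0]
print(accuracy(scores, labels), auc(scores, labels))
```

AUC is threshold-free, which is why a model can post both an accuracy (at one operating point) and an AUC (over all operating points), as reported for the five-protein logistic regression panel.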

The third chapter details the development of a photocleavable conjugate that multiplexed cell surface detection in microfluidic devices. The method successfully detected streptavidin on coated beads with 92% positive predictive rate. Furthermore, chambers with 0, 1, 2, and 3+ beads were statistically distinguishable. The method was then used to detect CD3 on Jurkat T cells, yielding a positive predictive rate of 49% and false positive rate of 0%.

The fourth chapter describes the use of T cell polyfunctionality measurements to predict whether a patient will respond to adoptive T cell transfer therapy. In 15 patients, we measured 10 proteins from individual T cells (~300 cells per patient). The polyfunctional strength index was calculated and then correlated with the patient's progression-free survival (PFS) time. 52 other parameters measured in the single cell test were also correlated with the PFS. No statistical correlate has been determined, however, and more data are necessary to reach a conclusion.

Finally, the fifth chapter examines the interactions between T cells and how they affect protein secretion. It was observed that T cells in direct contact selectively enhance their protein secretion, in some cases by over 5-fold. This occurred for Granzyme B, Perforin, CCL4, TNF-α, and IFN-γ. IL-10 was shown to decrease slightly upon contact. This phenomenon held true for T cells from all patients tested (n=8). Using single cell data, the theoretical protein secretion frequency was calculated for two cells and then compared to the observed rate of secretion for both two cells not in contact and two cells in contact. In over 90% of cases, the theoretical protein secretion rate matched that of two cells not in contact.
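
The two-cell comparison rests on a simple independence assumption: if each cell secretes a given protein with probability p, a chamber with two non-interacting cells should register secretion with probability 1 - (1 - p)^2. A minimal sketch, with an invented single-cell frequency rather than a measured one:

```python
def expected_two_cell_frequency(p_single):
    """Secretion frequency expected for two independent cells: the
    chamber reads positive unless both cells are silent."""
    return 1.0 - (1.0 - p_single) ** 2

# If 30% of single cells secrete a protein (invented number), two
# independent cells should read positive ~51% of the time. A measured
# two-cell-in-contact rate well above this signals contact-driven
# enhancement; a match signals independence, as seen for cells not
# in contact.
print(expected_two_cell_frequency(0.30))
```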