Abstract:
Creatinine levels in blood serum are typically used to assess renal function. Clinical determination of creatinine is often based on the Jaffe reaction, in which creatinine in the serum reacts with sodium picrate, resulting in a spectrophotometrically quantifiable product. Previous work from our lab introduced an electrophoretically mediated initiation of this reaction, in which nanoliter plugs of individual reagent solutions can be added to the capillary and then mixed and reacted. Following electrophoretic separation of the product from excess reactant(s), the product can be determined directly on column. This work aims to gain a detailed understanding of the in-capillary reagent mixing dynamics, in-line reaction yield, and product degradation during electrophoresis, with an overall goal of improving assay sensitivity. One set of experiments focuses on maximizing product formation through manipulation of conditions such as pH, applied voltage, and timing of the applied voltage, in addition to manipulation of the identity, concentration, and pH of the background electrolyte. Through this work, it was determined that dramatic differences in the local electric fields within the various reagent zones lead to ineffective reagent overlap. The simulation program Simul 5 enabled visualization of the reaction dynamics within the capillary, specifically the wide variance between the electric field intensities within the creatinine and picrate zones. Based on this simulation work, the experimental method was modified to increase the ionic strength of the creatinine reagent zone, lowering its local electric field and thus producing more predictable and effective overlap conditions for the reagents and allowing the formation of more Jaffe product. A second set of experiments focuses on controlling post-reaction product degradation. In that vein, we have systematically explored how the identity, concentration, and pH of the background electrolyte affect the post-reaction degradation rate of the product. Although prior work with borate background electrolytes indicated that product degradation was probably a function of the ionic strength of the background electrolyte, this work with a glycine background electrolyte demonstrates that degradation is in fact not a function of ionic strength. As the concentration and pH of the glycine background increased, the rate of product degradation did not change dramatically, whereas in borate-buffered systems the rate of Jaffe product degradation increased linearly with background electrolyte concentration above 100.0 mM borate. Similarly, increasing the pH of the glycine background electrolyte did not produce a corresponding increase in product degradation, as it had with the borate background electrolyte. Other general trends observed include: increasing background electrolyte concentration increases peak efficiency, and higher pH favors product formation. Thus, it appears that with a background electrolyte other than borate, such as glycine, the degradation of the Jaffe product can be slowed, increasing the sensitivity of this in-line assay.
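Because the reagent zones sit in series and carry the same current, the local electric field in each zone scales inversely with that zone's conductivity, which is why raising the ionic strength of the creatinine zone lowers its local field. A minimal voltage-divider sketch of this reasoning, with illustrative (not experimental) zone lengths and conductivities:

```python
# Minimal sketch: local electric field in serially connected capillary zones.
# Assumes a constant current through the capillary (series circuit), so the
# field in each zone scales inversely with that zone's conductivity.
# Zone lengths and conductivities below are illustrative, not measured values.

def local_fields(total_voltage, zones):
    """zones: list of (length_cm, conductivity). Returns V/cm for each zone."""
    # Each zone's resistance is proportional to length / conductivity.
    resistances = [length / cond for length, cond in zones]
    total_r = sum(resistances)
    # Voltage divides in proportion to resistance; field = zone voltage / length.
    return [total_voltage * (r / total_r) / length
            for (length, _), r in zip(zones, resistances)]

# BGE | creatinine plug | picrate plug | BGE, with a low-conductivity creatinine zone.
before = local_fields(15000, [(20.0, 0.50), (0.5, 0.05), (0.5, 0.40), (20.0, 0.50)])
# Raising the creatinine zone's ionic strength (conductivity) lowers its local field.
after = local_fields(15000, [(20.0, 0.50), (0.5, 0.35), (0.5, 0.40), (20.0, 0.50)])
print(before[1], after[1])  # the creatinine-zone field drops markedly
```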
Abstract:
The rehabilitation of concrete structures, especially concrete bridge decks, is a major challenge for transportation agencies in the United States. Often, the most appropriate strategy to preserve or rehabilitate these structures is to provide some form of protective coating or barrier. These surface treatments have typically been some form of polymer, asphalt, or low-permeability concrete, but ultra-high performance concrete (UHPC) has shown promise for this application, mainly due to its negligible permeability but also as a result of its excellent mechanical properties, self-consolidating nature, rapid strength gain, and minimal creep and shrinkage. However, for widespread acceptance, the durability and performance of the composite system must be fully understood, specifically the bond between UHPC and the normal-strength concrete (NSC) often used in bridge decks. It is essential that the bond offer enough strength to resist the stresses due to mechanical loading or thermal effects while maintaining extended service-life performance. This report assesses the bond strength between UHPC and NSC under different loading configurations. Several variables were included in this study: the degree of roughness of the concrete substrate, the age of the bond, exposure to freeze-thaw cycles, and the wetting condition of the concrete substrate. Splitting tensile tests combined with 0, 300, 600, and 900 freeze-thaw cycles were carried out to assess bond performance under severe ambient conditions. The slant-shear test was used with different interface angles to provide a broad understanding of bond performance under different combinations of compression and shear stresses. The pull-off test is the most accepted method for evaluating bond strength in the field; this test, which measures the direct tensile strength of the bond (the most severe loading condition), was used to provide data that can be correlated with the other tests, which can only be performed in the laboratory. The experimental program showed that the bond between UHPC and NSC performs well: regardless of the degree of substrate roughness, the age of the composite specimens, the exposure to freeze-thaw cycles, and the loading configuration, the bond strength exceeded that of the concrete substrate and largely satisfied ACI 546.3R-06.
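To see how the slant-shear geometry mixes compression and shear on the bond plane, resolve the applied axial stress onto the inclined interface. A minimal sketch, assuming a uniaxial load and measuring θ between the interface and the loading axis; the stress level and angles are illustrative, not the report's test data:

```python
import math

# Minimal sketch of the stress state on a slant-shear bond plane under a
# uniaxial load. theta is the angle between the interface and the loading
# axis; sigma0 is the applied axial stress (load / cross-sectional area).
# Values below are illustrative, not the report's data.

def interface_stresses(sigma0, theta_deg):
    t = math.radians(theta_deg)
    normal = sigma0 * math.sin(t) ** 2          # compression across the bond
    shear = sigma0 * math.sin(t) * math.cos(t)  # shear along the bond
    return normal, shear

for theta in (30, 45, 60):  # varying the interface angle shifts the mix
    n, s = interface_stresses(30.0, theta)  # 30 MPa applied stress, illustrative
    print(f"theta={theta} deg: normal={n:.1f} MPa, shear={s:.1f} MPa")
```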
Abstract:
Transformers are very important elements of any power system. Unfortunately, they are subjected to through-faults and abnormal operating conditions which can affect not only the transformer itself but also other equipment connected to it. Thus, it is essential to provide sufficient protection for transformers as well as the best possible selectivity and sensitivity of that protection. Nowadays, microprocessor-based relays are widely used to protect power equipment. Current differential and voltage protection strategies are used in transformer protection applications and provide fast and sensitive multi-level protection and monitoring. The elements responsible for detecting turn-to-turn and turn-to-ground faults are the negative-sequence percentage differential element and the restricted earth-fault (REF) element, respectively. During severe internal faults, current transformers can saturate and slow relay operation, which affects the degree of equipment damage. The scope of this work is to develop a modeling methodology for performing simulations and laboratory tests of internal faults such as turn-to-turn and turn-to-ground faults for two step-down power transformers with capacity ratings of 11.2 MVA and 290 MVA. The simulated current waveforms are injected into a microprocessor relay to check its sensitivity to these internal faults. Saturation of current transformers is also studied in this work. All simulations are performed with the Alternative Transients Program (ATP) utilizing the internal fault model for three-phase two-winding transformers. The tested microprocessor relay is the SEL-487E current differential and voltage protection relay. The results showed that the ATP internal fault model can be used for testing microprocessor relays for any percentage of turns involved in an internal fault. An interesting observation from the experiments was that the SEL-487E relay is more sensitive to turn-to-turn faults than advertised for the transformers studied. The sensitivity of the restricted earth-fault element was confirmed. CT saturation cases showed that low-accuracy CTs can be saturated by a high percentage of turn-to-turn faults, where the CT burden affects the extent of saturation. Recommendations for future work include more accurate simulation of internal faults, transformer energization inrush, and other scenarios involving core saturation, using the newest version of the internal fault model; the SEL-487E relay or other microprocessor relays should again be tested for performance. Also, application of a grounding bank to the delta-connected side of a transformer will increase the zone of protection, and relay performance can then be tested for internal ground faults on both sides of a transformer.
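The negative-sequence current that the percentage differential element monitors comes from the standard symmetrical-component transformation, I2 = (Ia + a²Ib + aIc)/3. A minimal sketch of that computation, with illustrative phasors rather than recorded relay data:

```python
import cmath
import math

# Minimal sketch: negative-sequence current from three phase-current phasors,
# the quantity a negative-sequence differential element operates on.
# a is the 120-degree rotation operator; the example phasors are illustrative.
a = cmath.exp(1j * 2 * math.pi / 3)

def negative_sequence(ia, ib, ic):
    # I2 = (Ia + a^2 * Ib + a * Ic) / 3
    return (ia + a**2 * ib + a * ic) / 3

# Balanced currents: the negative-sequence component is (numerically) zero.
ia, ib, ic = 1.0, a**2, a
print(abs(negative_sequence(ia, ib, ic)))  # ~0

# A turn-to-turn fault unbalances the phases and produces measurable I2.
print(abs(negative_sequence(1.1, a**2 * 0.95, a)))
```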
Abstract:
Personal photographs permeate our lives from the moment we are born, as they define who we are within our familial groups and local communities. Archived in family albums or framed on living room walls, they continue on after our death as mnemonic artifacts referencing our gendered, raced, and ethnic identities. This dissertation examines salient instances of what women “do” with personal photographs, not only as authors and subjects but also as collectors, archivists, and family and cultural historians. This project seeks to contribute to a more productive, complex discourse about how women form relationships and engage with the conventions and practices of personal photography. In the first part of this dissertation I revisit developments in the history of personal photography, including the advertising campaigns of the Kodak and Agfa Girls and the development of albums such as the Stammbuch and its predecessor, the carte-de-visite, that demonstrate how personal photography has functioned as a gendered activity referencing family unity, sentimentalism for the past, and self-representation within normative familial and dominant cultural groups, thus suggesting its importance as a cultural practice of identity formation. The second and primary section of the dissertation expands on the critical analyses of Gillian Rose, Patricia Holland, and Nancy Martha West, who propose that personal photography, marketed to and taken on by women, double-exposes their gendered identities. Drawing on work by critics such as Deborah Willis, bell hooks, and Abigail Solomon-Godeau, I examine how the reconfiguration, recontextualization, and relocation of personal photographs in the respective work of Christine Saari, Fern Logan, and Katie Knight interrogate and complicate gendered, raced, and ethnic identities and cultural attitudes about them. In the final section of the dissertation I briefly examine select examples of how emerging digital spaces on the Internet function as sites for personal photography, sites that both reinscribe traditional cultural formations and offer women new opportunities for the display and audiencing of identities outside the family.
Abstract:
State standardized testing has always been a tool to measure a school’s performance and to help evaluate school curriculum. However, with the school-of-choice legislation in 1992, the MEAP test became a measuring stick by which to grade schools and a major tool in attracting school-of-choice students. Now, declining enrollment and a state budget struggling to stay out of the red have made school-of-choice students more important than ever before, and MEAP scores have become the deciding factor in some cases. For the past five years, the Hancock Middle School staff has been working hard to improve their students’ MEAP scores in accordance with President Bush's “No Child Left Behind” legislation. In 2005, the school was awarded a grant that enabled staff to work for two years on writing and working toward school goals that were based on the improvement of MEAP scores in writing and math. As part of this effort, the school purchased an internet-based program geared toward giving students practice on state content standards. This study examined the results of efforts by Hancock Middle School to help improve student scores in mathematics on the MEAP test through the use of an online program called “Study Island.” In the past, the program was used to remediate students and as an end-of-year review with an incentive for students completing a certain number of objectives; it had also been used as a review before upcoming MEAP testing in the fall. All of these methods may have helped a few students perform at an increased level on their standardized test, but the question remained whether sustained use of the program in a classroom setting would increase understanding of concepts and MEAP performance for the masses. This study addressed that question. Student MEAP scores and Study Island data from experimental and comparison groups of students were compared to understand how sustained classroom use of Study Island would impact student test scores on the MEAP. In addition, these data were analyzed to determine whether Study Island results provide a good indicator of students’ MEAP performance. The results of the study suggest that there were limited benefits related to sustained use of Study Island and give some indications about the effectiveness of the mathematics curriculum at Hancock Middle School. These results and implications for instruction are discussed.
Abstract:
The degree of polarization of a reflected field from active laser illumination can be used for object identification and classification. The goal of this study is to investigate methods for estimating the degree of polarization for reflected fields with active laser illumination, which involves the measurement and processing of two orthogonal field components (complex amplitudes), two orthogonal intensity components, and the total field intensity. We propose to replace interferometric optical apparatuses with a computational approach that estimates the degree of polarization from two orthogonal intensity measurements and from total intensity measurements. Cramer-Rao bounds for each of the three sensing modalities with various noise models are computed. Algebraic estimators and maximum-likelihood (ML) estimators are proposed. An active-set algorithm and an expectation-maximization (EM) algorithm are used to compute the ML estimates. The performances of the estimators are compared with each other and with their corresponding Cramer-Rao bounds. Estimators for four-channel polarimeter (intensity interferometer) sensing perform better than the orthogonal-intensities and total-intensity estimators. Processing the four intensity channels from the polarimeter, however, requires complicated optical devices, alignment, and four CCD detectors, whereas processing orthogonal intensity data or total intensity data requires only one or two detectors and a computer. The bounds and estimator performances demonstrate that reasonable estimates may still be obtained from orthogonal intensities or total intensity data. Computational sensing is a promising way to estimate the degree of polarization.
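In the four-channel polarimeter modality, the degree of polarization has a closed-form algebraic estimate through the Stokes parameters. A minimal sketch of that textbook construction, with illustrative intensities; this is a standard formulation, not necessarily the exact channel configuration or estimator used in the study:

```python
import math

# Minimal sketch: algebraic estimate of the degree of polarization from
# four measured intensities via the standard Stokes-parameter construction.
# i_h, i_v: horizontal/vertical; i_45: 45-degree linear; i_rc: right-circular.
# The intensity values below are illustrative, not data from the study.

def degree_of_polarization(i_h, i_v, i_45, i_rc):
    s0 = i_h + i_v               # total intensity
    s1 = i_h - i_v               # horizontal vs. vertical preference
    s2 = 2 * i_45 - s0           # +45 vs. -45 preference
    s3 = 2 * i_rc - s0           # right vs. left circular preference
    return math.sqrt(s1**2 + s2**2 + s3**2) / s0

print(degree_of_polarization(0.7, 0.3, 0.6, 0.5))  # ~0.447
```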
Abstract:
Administrators of writing programs are regularly faced with the problem of assessing the learning that students gain in their coursework. Many methods of assessment exist, but most have problems associated with the amount of time it takes to perform the study or the scope of the knowledge gained relative to the number of participants or the volume of information collected. This pilot study investigates the use of surveys of student opinion for their potential to assess composition instruction at Michigan Technological University. The primary goal of this pilot study is to test the effectiveness of using data collected in surveys to make recommendations for improvement of the composition program at Michigan Tech. The report concludes with recommendations for additional study and refinements to the instruments used.
Abstract:
An optimizing compiler's internal representation fundamentally affects the clarity, efficiency, and feasibility of the optimization algorithms employed by the compiler. Static Single Assignment (SSA), the state-of-the-art program representation, has great advantages but can still be improved. This dissertation explores the domain of single assignment beyond SSA and presents two novel program representations: Future Gated Single Assignment (FGSA) and Recursive Future Predicated Form (RFPF). Both FGSA and RFPF embed control flow and data flow information, enabling efficient traversal of program information and thus leading to better and simpler optimizations. We introduce the future value concept, the design basis of both FGSA and RFPF, which permits a consumer instruction to be encountered before the producer of its source operand(s) in a control flow setting. We show that FGSA is efficiently computable using a series of T1/T2/TR transformations, yielding an expected linear-time algorithm that combines construction of the pruned single assignment form with live analysis for both reducible and irreducible graphs. As a result, the approach yields an average reduction of 7.7%, and a maximum of 67%, in the number of gating functions compared to the pruned SSA form on the SPEC2000 benchmark suite. We present a solid and near-optimal framework for performing the inverse transformation from single assignment programs. We demonstrate the importance of unrestricted code motion and present RFPF. We develop algorithms that enable instruction movement in acyclic as well as cyclic regions, and show the ease of performing optimizations such as Partial Redundancy Elimination on RFPF.
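For context, T1 and T2 are the classic interval-analysis reductions: T1 deletes a self-loop, and T2 absorbs a node with a unique predecessor into that predecessor; a flow graph is reducible exactly when these rules collapse it to a single node. A minimal sketch of T1/T2 alone (TR is the dissertation's addition and is not reproduced here), on an illustrative graph:

```python
# Minimal sketch of the classic T1/T2 reductions referenced in the T1/T2/TR
# scheme. The control-flow graph is a dict mapping each node to its set of
# successors; node names are illustrative.

def t1(graph):
    """T1: remove a self-loop edge. Returns True if an edge was removed."""
    for n, succs in graph.items():
        if n in succs:
            succs.discard(n)
            return True
    return False

def t2(graph):
    """T2: absorb a node with a unique predecessor into that predecessor."""
    for n in list(graph):
        preds = [p for p, succs in graph.items() if n in succs]
        if len(preds) == 1 and preds[0] != n:
            p = preds[0]
            graph[p].discard(n)        # drop the edge p -> n
            graph[p] |= graph.pop(n)   # p inherits n's successors
            return True
    return False

def reduce_cfg(graph):
    """Apply T1/T2 to a fixpoint; a reducible CFG collapses to one node."""
    while t1(graph) or t2(graph):
        pass
    return graph

g = {"entry": {"a"}, "a": {"b", "c"}, "b": {"d"},
     "c": {"d"}, "d": {"a", "exit"}, "exit": set()}
print(reduce_cfg(g))  # {'entry': set()} -> the graph is reducible
```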
Abstract:
The Collingwood Member is a mid-to-late Ordovician self-sourced reservoir deposited across the northern Michigan Basin and parts of Ontario, Canada. Although it has been previously studied in Canada, relatively little data have been available from the Michigan subsurface. Recent commercial interest in the Collingwood has resulted in the drilling and production of several wells in the state of Michigan. An analysis of core samples, measured laboratory data, and petrophysical logs has yielded both a quantitative and a qualitative understanding of the formation in the Michigan Basin. The Collingwood is a low-permeability, low-porosity carbonate package that is very high in organic content. It is composed primarily of a uniformly fine-grained carbonate matrix with lesser amounts of kerogen, silica, and clays. The kerogen content of the Collingwood is finely dispersed in the clay and carbonate mineral phases. Geochemical and production data show that both oil and gas phases are present depending on regional thermal maturity. The deposit is richest in the north-central part of the basin, where deposition is thickest and organic content is highest. The Collingwood is a fairly thin deposit, and vertical fractures may easily extend into the surrounding formations; completion and treatment techniques should be designed around these parameters to enhance production.
Abstract:
The Michigan Basin is located in the upper Midwest region of the United States and is centered geographically over the Lower Peninsula of Michigan. It is filled primarily with Paleozoic carbonates and clastics, overlying Precambrian basement rocks and covered by Pleistocene glacial drift. In Michigan, more than 46,000 wells have been drilled in the basin, many producing significant quantities of oil and gas since the 1920s, in addition to providing a wealth of data for subsurface visualization. Well log tomography, formerly log-curve amplitude slicing, is a visualization method recently developed at Michigan Technological University to correlate subsurface data by utilizing the high vertical resolution of well log curves. The method was first successfully applied to the Middle Devonian Traverse Group within the Michigan Basin using gamma ray log curves. The purpose of this study is to prepare a digital data set for the Middle Devonian Dundee and Rogers City Limestones, apply the well log tomography method to these data, and from this application interpret paleogeographic trends in natural radioactivity. Both the Dundee and Rogers City intervals directly underlie the Traverse Group and combined are the most prolific reservoir within the Michigan Basin. Differences from the Traverse Group study include increased well control and “slicing” of a more uniform lithology. Gamma ray log curves for the Dundee and Rogers City Limestones were obtained from 295 vertical wells distributed over the Lower Peninsula of Michigan, converted to Log ASCII Standard files, and input into the well log tomography program. The “slicing” contour results indicate that during the formation of the Dundee and Rogers City intervals, carbonates and evaporites with low natural radioactive signatures on gamma ray logs were deposited. This contrasts with the higher gamma ray amplitudes from the siliciclastic deltas that cyclically entered the basin during Traverse Group deposition. Additionally, a subtle north-south trend of low natural radioactivity in the center of the basin may correlate with previously published Dundee facies tracts. Prominent trends associated with the distribution of limestone and dolomite are not observed because the regional ranges of gamma ray values for the two carbonates are equivalent in the Michigan Basin, and additional log curves are needed to separate these lithologies.
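A minimal sketch of the proportional "slicing" idea underlying well log tomography as described here: each well's gamma ray curve is sampled at a fixed fractional position between the top and base of the interval, giving one map-view value per well per slice. The actual program is more involved; the depths and curve below are synthetic:

```python
import numpy as np

# Minimal sketch of proportional "slicing": sample each well's gamma ray
# curve at a fixed fractional position between the interval's top and base,
# yielding one map-view value per well for that slice (contoured across
# wells in the full method). Depths and log values are synthetic.

def slice_value(depths, gamma, top, base, fraction):
    """Interpolate the gamma ray value at depth top + fraction*(base - top)."""
    target = top + fraction * (base - top)
    return np.interp(target, depths, gamma)

depths = np.arange(1500.0, 1560.0, 0.5)   # measured depth, ft (synthetic)
gamma = 20 + 10 * np.sin(depths / 3.0)    # synthetic GR curve, API units
# Value for the 25%-down slice in one well's Dundee-like interval.
print(slice_value(depths, gamma, 1505.0, 1550.0, 0.25))
```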
Abstract:
BACKGROUND: Prophylactic exogenous surfactant therapy is a promising way to attenuate the ischemia and reperfusion (I/R) injury associated with lung transplantation and thereby to decrease the clinical occurrence of acute lung injury and acute respiratory distress syndrome. However, there is little information on the mode by which exogenous surfactant attenuates I/R injury of the lung. We hypothesized that exogenous surfactant may act by limiting pulmonary edema formation and by enhancing alveolar type II cell and lamellar body preservation. Therefore, we investigated the effect of exogenous surfactant therapy on the formation of pulmonary edema in different lung compartments and on the ultrastructure of the surfactant-producing alveolar epithelial type II cells. METHODS: Rats were randomly assigned to a control, Celsior (CE) or Celsior + surfactant (CE+S) group (n = 5 each). In both Celsior groups, the lungs were flush-perfused with Celsior and subsequently exposed to 4 h of extracorporeal ischemia at 4 °C and 50 min of reperfusion at 37 °C. The CE+S group received an intratracheal bolus of a modified natural bovine surfactant at a dosage of 50 mg/kg body weight before flush perfusion. After reperfusion (Celsior groups) or immediately after sacrifice (control), the lungs were fixed by vascular perfusion and processed for light and electron microscopy. Stereology was used to quantify edematous changes as well as alterations of the alveolar epithelial type II cells. RESULTS: Surfactant treatment decreased intraalveolar edema formation (mean (coefficient of variation): CE: 160 mm³ (0.61) vs. CE+S: 4 mm³ (0.75); p < 0.05) and the development of atelectases (CE: 342 mm³ (0.90) vs. CE+S: 0 mm³; p < 0.05) but led to a higher degree of peribronchovascular edema (CE: 89 mm³ (0.39) vs. CE+S: 268 mm³ (0.43); p < 0.05). Alveolar type II cells were similarly swollen in CE (423 µm³ (0.10)) and CE+S (481 µm³ (0.10)) compared with controls (323 µm³ (0.07); p < 0.05 vs. CE and CE+S). The number of lamellar bodies was increased and the mean lamellar body volume was decreased in both CE groups compared with the control group (p < 0.05). CONCLUSION: Intratracheal surfactant application before I/R significantly reduces intraalveolar edema formation and the development of atelectases but leads to increased peribronchovascular edema. Morphological changes of alveolar type II cells due to I/R are not affected by surfactant treatment. The beneficial effects of exogenous surfactant therapy are related to the intraalveolar activity of the exogenous surfactant.
Abstract:
The unsupervised categorization of sensory stimuli is typically attributed to feedforward processing in a hierarchy of cortical areas. This purely sensory-driven view of cortical processing, however, ignores any internal modulation, e.g., by top-down attentional signals or neuromodulator release. To isolate the role of internal signaling on category formation, we consider an unbroken continuum of stimuli without intrinsic category boundaries. We show that a competitive network, shaped by recurrent inhibition and endowed with Hebbian and homeostatic synaptic plasticity, can enforce stimulus categorization. The degree of competition is internally controlled by the neuronal gain and the strength of inhibition. Strong competition leads to the formation of many attracting network states, each being evoked by a distinct subset of stimuli and representing a category. Weak competition allows more neurons to be co-active, resulting in fewer but larger categories. We conclude that the granularity of cortical category formation, i.e., the number and size of emerging categories, is not simply determined by the richness of the stimulus environment, but rather by some global internal signal modulating the network dynamics. The model also explains the salient non-additivity of visual object representation observed in the monkey inferotemporal (IT) cortex. Furthermore, it offers an explanation of a previously observed, demand-dependent modulation of IT activity on a stimulus categorization task and of categorization-related cognitive deficits in schizophrenic patients.
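A toy sketch of the competition mechanism described, assuming a soft winner-take-all readout whose gain sets the competition strength, a Hebbian outer-product update, and weight renormalization standing in for homeostatic scaling; the architecture and parameters are illustrative, not the paper's model:

```python
import numpy as np

# Toy sketch of gain-controlled competition: units compete via a softmax
# (standing in for recurrent inhibition), Hebbian updates strengthen
# co-active weights, and renormalization mimics homeostatic scaling.
# Architecture and parameters are illustrative, not the paper's model.

rng = np.random.default_rng(0)
n_in, n_out, eta, g = 50, 8, 0.05, 12.0
W = rng.random((n_out, n_in))
W /= W.sum(axis=1, keepdims=True)          # homeostatic-like normalization

def respond(x):
    h = W @ x
    e = np.exp(g * (h - h.max()))          # higher gain -> sharper competition
    return e / e.sum()                     # soft winner-take-all activity

def learn(x):
    global W
    y = respond(x)
    W += eta * np.outer(y, x)              # Hebbian: co-activity strengthens
    W /= W.sum(axis=1, keepdims=True)      # renormalize (synaptic scaling)

# Stimuli drawn from a continuum with no built-in category boundaries.
stimuli = [np.roll(np.exp(-0.5 * ((np.arange(n_in) - 25) / 3.0) ** 2), k)
           for k in range(-20, 21, 2)]
for _ in range(200):
    learn(stimuli[rng.integers(len(stimuli))])

# A category = the set of stimuli for which the same unit wins.
winners = {int(np.argmax(respond(s))) for s in stimuli}
print(f"{len(winners)} categories formed with gain g={g}")
```

Raising g sharpens the competition and tends to split the continuum into more, smaller categories; lowering it lets more units co-activate and merges stimuli into fewer, larger ones, mirroring the granularity argument in the abstract.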
Abstract:
Technical communication certificates are offered by many colleges and universities as an alternative to a full undergraduate or graduate degree in the field. Despite certificates’ increasing popularity in recent years, however, surprisingly little commentary exists about them within the scholarly literature. In this work, I describe a survey of certificate and baccalaureate programs that I performed in 2008 in order to develop basic, descriptive data on programs’ age, size, and graduation rates; departmental location; curricular requirements; online offerings; and instructor status and qualifications. In performing this research, I apply recent insights from neosophistic rhetorical theory and feminist critiques of science to both articulate, and model, a feminist-sophistic methodology. I also suggest in this work that technical communication certificates can be theorized as a particularly sophistic credential for a particularly sophistic field, and I discuss the implications of neosophistic theory for certificate program design and administration.
Abstract:
Spacecraft formation flying navigation continues to receive a great deal of interest. The research presented in this dissertation focuses on developing methods for estimating spacecraft absolute and relative positions, assuming measurements of only relative positions using wireless sensors. The implementation of the extended Kalman filter for the spacecraft formation navigation problem results in high estimation errors and, at times, instabilities in state estimation, due to the high nonlinearities in the system dynamic model. Several approaches are attempted in this dissertation aimed at increasing estimation stability and improving estimation accuracy. A differential geometric filter is implemented for spacecraft position estimation. The differential geometric filter avoids the linearization step (which is always carried out in the extended Kalman filter) through a mathematical transformation that converts the nonlinear system into a linear one. A linear estimator is designed in the linear domain and then transformed back to the physical domain. This approach demonstrated better estimation stability for spacecraft formation position estimation, as detailed in this dissertation. The constrained Kalman filter is also implemented for spacecraft formation flying absolute position estimation. The orbital motion of a spacecraft is characterized by two range extrema (perigee and apogee); at an extremum, the rate of change of the spacecraft's range vanishes. This motion constraint can be used to improve position estimation accuracy. Applying the constrained Kalman filter at only two points in the orbit, however, causes filter instability, so two variables are introduced into the constrained Kalman filter to maintain stability and improve estimation accuracy. An extended Kalman filter is implemented as a benchmark for comparison with the constrained Kalman filter. Simulation results show that the constrained Kalman filter provides better estimation accuracy than the extended Kalman filter. A Weighted Measurement Fusion Kalman Filter (WMFKF) is proposed in this dissertation. In wireless localizing sensors, measurement error is proportional to the distance the signal travels and to sensor noise. In the proposed WMFKF, the signal travel time delay is not modeled; instead, each measurement is weighted based on the measured signal travel distance. The estimation performance is compared to that of the standard Kalman filter in two scenarios: the first assumes use of a wireless local positioning system (WLPS) in a GPS-denied environment, and the second assumes the availability of both WLPS and GPS measurements. The simulation results show that the WMFKF has accuracy similar to the standard Kalman filter (KF) in the GPS-denied environment; however, the WMFKF maintains the position estimation error within its expected error boundary when the WLPS detection range limit is above 30 km. In addition, the WMFKF has better accuracy and stability when GPS is available. The computational cost analysis shows that the WMFKF has a lower computational cost than the standard KF, and the WMFKF has a higher ellipsoid error probable percentage than the standard measurement fusion method. Finally, a method to determine the relative attitudes between three spacecraft is developed. The method requires four direction measurements between the three spacecraft.
The simulation results and covariance analysis show that the method's error falls within a three-sigma boundary without exhibiting any singularity issues. A study of the accuracy of the proposed method with respect to the shape of the spacecraft formation is also presented.
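A minimal sketch of the distance-weighting idea behind the WMFKF, assuming the measurement noise variance is inflated in proportion to the measured range before a standard Kalman update; the one-dimensional model, the linear scaling law, and the numbers are illustrative assumptions, not the dissertation's formulation:

```python
import numpy as np

# Minimal sketch of distance-weighted measurement fusion: each range
# measurement's noise covariance is scaled with the measured distance
# before a standard Kalman update, so far (noisier) signals count less.
# The 1D state, linear scaling law, and values are illustrative.

def wmf_kalman_update(x, P, z_list, h, H, base_var, alpha=1e-3):
    """Sequentially fuse measurements, inflating R with measured distance."""
    for z in z_list:
        R = base_var * (1.0 + alpha * z)          # weight: farther -> noisier
        S = H @ P @ H.T + R
        K = P @ H.T @ np.linalg.inv(S)
        x = x + K @ (z - h(x))
        P = (np.eye(len(x)) - K @ H) @ P
    return x, P

x = np.array([100.0])                  # state: range to a neighbor, km
P = np.eye(1) * 25.0
H = np.eye(1)
h = lambda s: s                        # direct range measurement
x, P = wmf_kalman_update(x, P, [103.0, 98.5], h, H, base_var=np.eye(1) * 4.0)
print(x, P)
```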
Abstract:
A significant cost in foundations is the design and installation of piles when they are required due to poor ground conditions. Not only is it important that piles be designed properly, but also that the installation equipment and total cost be evaluated. To assist in the evaluation of piles, a number of methods have been developed. In this research, three of these methods were investigated: those developed by the Federal Highway Administration, the US Army Corps of Engineers, and the American Petroleum Institute (API). The results from these methods were entered into the program GRLWEAP™ to assess pile drivability and to provide a standard basis for comparing the three methods. An additional element of this research was to develop EXCEL spreadsheets implementing the three methods. Currently, the Army Corps and API methods do not have publicly available software and must be performed manually, which requires that data be read off figures and tables, a process that can introduce error into the prediction of pile capacities. Following their development, the EXCEL spreadsheets were validated with both manual calculations and existing data sets to ensure that the output is correct. To evaluate the three pile capacity methods, data from four project sites in North America were utilized. The data included site geotechnical data along with field-determined pile capacities. To achieve a standard comparison of the data, the pile capacities and geotechnical data from the three methods were entered into GRLWEAP™. The sites consisted of both cohesive and cohesionless soils: one site was primarily cohesive, one was primarily cohesionless, and the other two consisted of interbedded cohesive and cohesionless soils. Based on this limited data set, the results indicated that the US Army Corps of Engineers method compared most closely with the field test data, followed by the API method to a lesser degree. The DRIVEN program compared favorably in cohesive soils but overpredicted capacities in cohesionless material.
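As a flavor of what the API spreadsheet automates, the method's skin friction in clay follows the α-method. A minimal sketch after the API RP 2A formulation for a single clay layer, omitting end bearing, cohesionless layers, and integration of capacity along the pile; the inputs are illustrative:

```python
# Minimal sketch of the API alpha-method for unit skin friction in clay
# (after API RP 2A). A full design also handles end bearing, cohesionless
# layers, and integration along the pile length. Inputs are illustrative.

def api_unit_skin_friction(su_kpa, sigma_v_eff_kpa):
    """Unit skin friction f = alpha * su for a clay layer."""
    psi = su_kpa / sigma_v_eff_kpa            # strength ratio su / sigma'_v
    alpha = 0.5 * psi ** (-0.5 if psi <= 1.0 else -0.25)
    return min(alpha, 1.0) * su_kpa           # alpha is capped at 1.0

# Example layer: su = 50 kPa, effective overburden = 100 kPa.
print(api_unit_skin_friction(50.0, 100.0))    # ~35 kPa
```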