896 results for: Simplified design method
Abstract:
This research initiative was triggered by the water-management problems of the Polymer Electrolyte Membrane Fuel Cell (PEMFC). In low-temperature fuel cells such as the PEMFC, some of the water produced by the chemical reaction remains in the liquid state. Excess water produced by the fuel cell must be removed from the system to avoid flooding of the gas diffusion layers (GDL). The GDL is responsible for transporting reactant gas to the active sites and removing the water produced at those sites. If the GDL is flooded, the supply gas cannot reach the reactive sites and the fuel cell fails. The water-removal method chosen in this research is to exert a variable asymmetrical force on a liquid droplet. As the droplet is subjected to an external vibrational force in the form of a periodic wave, it begins to oscillate. A fluidic oscillator is capable of producing a pulsating flow using a simple balance of momentum fluxes between three impinging jets. By connecting the outputs of the oscillator to the gas channels of a fuel cell, a flow pulsation can be imposed on a water droplet formed within the gas channel during fuel cell operation. The lowest frequency produced by this design is approximately 202 Hz, obtained with a 20-inch feedback port length and a supply pressure of 5 psig. This result was found by setting up a fluidic network with appropriate data acquisition. The components include a fluidic amplifier, valves and fittings, flow meters, a pressure gage, an NI-DAQ system, Siglab®, Matlab software, and four PCB microphones. The operating environment of the water droplet was reviewed, the speed of the sound pressure wave traveling down the square channel was precisely estimated, and measurement devices were carefully selected. Applicable alternative measurement devices and their application to pressure-wave measurement were also considered.
Methods for the experimental setup and possible approaches were recommended, with some discussion of potential problems in implementing this technique. Some computational fluid dynamics analysis was also performed as part of the oscillator design approach.
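The 202 Hz figure and the 20-inch feedback port invite a quick consistency check. Assuming the common feedback-oscillator relation f ≈ v/(2L), i.e. one feedback-line transit per half period (an assumption of this sketch, not a relation stated in the abstract), the measured frequency implies an effective in-channel propagation speed:

```python
# Back-of-envelope check relating feedback-line length and oscillation
# frequency for a feedback-type fluidic oscillator.  The relation
# f = v / (2 * L) is assumed here for illustration.

IN_TO_M = 0.0254  # inches to metres

def effective_wave_speed(freq_hz: float, feedback_length_in: float) -> float:
    """Effective propagation speed implied by a measured oscillation frequency."""
    length_m = feedback_length_in * IN_TO_M
    return 2.0 * length_m * freq_hz

v = effective_wave_speed(202.0, 20.0)
print(f"implied propagation speed: {v:.0f} m/s")
```

Under this assumption the implied speed (about 205 m/s) falls well below the roughly 343 m/s free-air speed of sound, consistent with the abstract's point that the in-channel wave speed had to be estimated rather than assumed.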
Abstract:
This report shares my efforts in developing a solid unit of instruction that has a clear focus on student outcomes. I have been a teacher for 20 years and have been writing and revising curricula for much of that time. However, most has been developed without the benefit of current research on how students learn and did not focus on what and how students are learning. My journey as a teacher has involved a lot of trial and error. My traditional method of teaching is to look at the benchmarks (now content expectations) to see what needs to be covered. My unit consists of having students read the appropriate sections in the textbook, complete worksheets, watch a video, and take some notes. I try to include at least one hands-on activity, one or more quizzes, and the traditional end-of-unit test consisting mostly of multiple-choice questions I find in the textbook. I try to be engaging, make the lessons fun, and hope that at the end of the unit my students get whatever concepts I've presented so that we can move on to the next topic. I want to increase students' understanding of science concepts and their ability to connect that understanding to the real world. However, sometimes I feel that my lessons are missing something. For a long time I have wanted to develop a unit of instruction that I know is an effective tool for the teaching and learning of science. In this report, I describe my efforts to reform my curricula using the “Understanding by Design” process. I want to see if this style of curriculum design will help me be a more effective teacher and if it will lead to an increase in student learning. My hypothesis is that this new (for me) approach to teaching will lead to increased understanding of science concepts among students because it is based on purposefully thinking about learning targets based on “big ideas” in science.
For my reformed curricula I incorporate lessons from several outstanding programs I've been involved with, including EpiCenter (Purdue University), Incorporated Research Institutions for Seismology (IRIS), the Master of Science Program in Applied Science Education at Michigan Technological University, and the Michigan Association for Computer Users in Learning (MACUL). In this report, I present the methodology for how I developed a new unit of instruction based on the Understanding by Design process. I present several lessons and learning plans I've developed for the unit, which follow the 5E Learning Cycle, as appendices at the end of this report. I also include the results of pilot testing one of the lessons. Although the lesson I pilot-tested was not as successful in increasing student learning outcomes as I had anticipated, the development process I followed was helpful in that it required me to focus on important concepts. Conducting the pilot test was also helpful because it led me to identify ways in which I could improve the lesson in the future.
Abstract:
The Mobile Mesh Network based In-Transit Visibility (MMN-ITV) system provides global real-time tracking capability for logistics systems. In-transit containers form a multi-hop mesh network that forwards tracking information to nearby sinks, which deliver the information to the remote control center via satellite. The fundamental challenge to the MMN-ITV system is the energy constraint of the battery-operated containers. Coupled with the unique mobility pattern, the cross-MMN behavior, and the large spanned area, this makes it necessary to investigate energy-efficient communication for the MMN-ITV system thoroughly. First, this dissertation models energy-efficient routing under the unique pattern of the cross-MMN behavior. A new modeling approach, the pseudo-dynamic modeling approach, is proposed to measure the energy efficiency of routing methods in the presence of the cross-MMN behavior. With this approach, it is shown that shortest-path routing and load-balanced routing are energy-efficient in mobile networks and static networks, respectively. For the MMN-ITV system with both mobile and static MMNs, an energy-efficient routing method, energy-threshold routing, is proposed to achieve the best tradeoff between them. Second, due to the cross-MMN behavior, neighbor discovery is executed frequently to help new containers join the MMN and hence consumes a similar amount of energy to the data communication itself. By exploiting the unique pattern of the cross-MMN behavior, this dissertation proposes energy-efficient neighbor-discovery wakeup schedules that save up to 60% of the energy used for neighbor discovery. Vehicular Ad Hoc Network (VANET)-based inter-vehicle communication is now widely believed to enhance traffic safety and transportation management at low cost. The end-to-end delay is critical for time-sensitive safety applications in VANETs and can be a decisive performance metric for them.
This dissertation presents a complete analytical model to evaluate the end-to-end delay as a function of the transmission range and the packet arrival rate. The model shows a significant end-to-end delay increase from non-saturated to saturated networks. It hence suggests that distributed power control and admission control protocols for VANETs should aim at improving the real-time capacity (the maximum packet generation rate that does not cause saturation) rather than the delay itself. Based on this model, it can be shown that adopting a uniform transmission range for every vehicle may hinder delay-performance improvement, since it does not allow short path lengths and low interference to coexist. Clusters are proposed to configure non-uniform transmission ranges for the vehicles. Analysis and simulation confirm that such a configuration can enhance the real-time capacity and provides an improved tradeoff between the end-to-end delay and the network capacity. A distributed clustering protocol with minimum message overhead is proposed, which achieves low convergence time.
Abstract:
Gene-directed enzyme prodrug therapy is a form of cancer therapy in which delivery of a gene encoding an enzyme enables conversion of a prodrug, a pharmacologically inactive molecule, into a potent cytotoxin. Currently, delivery of gene and prodrug is a two-step process. Here, we propose a one-step method using polymer nanocarriers to deliver prodrug, gene and cytotoxic drug simultaneously to malignant cells. The prodrugs acyclovir, ganciclovir and 5-doxifluridine were used directly to initiate ring-opening polymerization of epsilon-caprolactone, forming a hydrophobic prodrug-tagged poly(epsilon-caprolactone) that was further grafted with hydrophilic polymers (methoxy poly(ethylene glycol), chitosan or polyethylenimine) to form amphiphilic copolymers for micelle formation. Successful synthesis of the copolymers and micelle formation was confirmed by standard analytical means. Conversion of the prodrugs to their cytotoxic forms was analyzed by both two-step and one-step means: first by delivering gene plasmid into the HT29 cell line and then challenging the cells with the prodrug-tagged micelle carriers, and second by complexing gene plasmid onto the micelle nanocarriers and delivering gene and prodrug simultaneously to parental HT29 cells. The anticancer effectiveness of the prodrug-tagged micelles was further enhanced by encapsulating the chemotherapy drugs doxorubicin or SN-38; viability of the colon cancer cell line HT29 was significantly reduced. Furthermore, in an effort to develop a stealth and targeted carrier, CD47-streptavidin fusion protein was attached onto the micelle surface utilizing biotin-streptavidin affinity. CD47, a marker of self on the red blood cell surface, was used for its antiphagocytic efficacy; micelles bound with CD47 resisted phagocytosis when exposed to J774A.1 macrophages.
Since CD47 is not only an antiphagocytic ligand but also an integrin-associated protein, it was used to target integrin alpha(v)beta(3), which is overexpressed on tumor-activated neovascular endothelial cells. Results showed that CD47-tagged micelles had enhanced uptake by PC3 cells, which highly express alpha(v)beta(3). The multifunctional polymeric micelle carriers developed here could offer a new platform for an innovative cancer therapy regimen.
Abstract:
Non-uniformity of steps within a flight is a major risk factor for falls. Guidelines and requirements for uniformity of step risers and tread depths assume the measurement system provides precise dimensional values. The state-of-the-art measurement system is a relatively new method known as the nosing-to-nosing method. It involves measuring the distance between the noses of adjacent steps and the angle formed with the horizontal; from these measurements, the effective riser height and tread depth are calculated. This study was undertaken to evaluate the measurement system and determine how much of the total measurement variability comes from the step variations versus the repeatability and reproducibility (R&R) associated with the measurers. Using an experimental design that quality-control professionals call a measurement-system experiment, two measurers measured all steps in six randomly selected flights and repeated the process on a subsequent day. After marking each step in a flight in three lateral places (left, center, and right), the measurers took their measurements. This process yielded 774 values of riser height and 672 values of tread depth. Results of applying the Gage R&R ANOVA procedure in Minitab software indicated that the R&R contribution to riser-height variability was 1.42% and to tread-depth variability was 0.50%; all remaining variability was attributed to actual step-to-step differences. These results may be compared with guidelines used in the automobile industry, which consider a measurement system with R&R less than 1% acceptable, and R&R between 1% and 9% acceptable depending on the application, the cost of the measuring device, the cost of repair, or other factors.
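The variance partition behind a Gage R&R study can be sketched in a few lines. The following is the generic ANOVA method for a crossed design (parts × operators × replicates), not the study's Minitab session, and the simulated numbers are illustrative only:

```python
import numpy as np

def gage_rr(data):
    """Crossed Gage R&R variance components via the ANOVA method.

    data: array of shape (parts, operators, replicates).
    Returns variance components and the %R&R contribution to total variance.
    """
    p, o, r = data.shape
    grand = data.mean()
    part_means = data.mean(axis=(1, 2))
    op_means = data.mean(axis=(0, 2))
    cell_means = data.mean(axis=2)

    # Sums of squares for the two-way crossed design with interaction.
    ss_part = o * r * ((part_means - grand) ** 2).sum()
    ss_op = p * r * ((op_means - grand) ** 2).sum()
    ss_cell = r * ((cell_means - grand) ** 2).sum()
    ss_inter = ss_cell - ss_part - ss_op
    ss_error = ((data - grand) ** 2).sum() - ss_cell

    ms_part = ss_part / (p - 1)
    ms_op = ss_op / (o - 1)
    ms_inter = ss_inter / ((p - 1) * (o - 1))
    ms_error = ss_error / (p * o * (r - 1))

    # Variance components from expected mean squares (negatives clipped to 0).
    var_repeat = ms_error
    var_inter = max(0.0, (ms_inter - ms_error) / r)
    var_op = max(0.0, (ms_op - ms_inter) / (p * r))
    var_part = max(0.0, (ms_part - ms_inter) / (o * r))

    var_rr = var_repeat + var_op + var_inter
    return {"%R&R contribution": 100.0 * var_rr / (var_rr + var_part),
            "repeatability": var_repeat,
            "reproducibility": var_op + var_inter,
            "part-to-part": var_part}

# Simulated example: large real step-to-step differences, small measurer effects.
rng = np.random.default_rng(42)
p, o, r = 6, 2, 2
sim = (10.0
       + rng.normal(0, 1.0, size=(p, 1, 1))    # step-to-step differences
       + rng.normal(0, 0.02, size=(1, o, 1))   # measurer bias
       + rng.normal(0, 0.05, size=(p, o, r)))  # repeatability noise
result = gage_rr(sim)
print(f"%R&R contribution: {result['%R&R contribution']:.2f}%")
```

With measurement noise this small relative to the real step variation, the %R&R contribution lands in the low single digits, mirroring the 1.42% and 0.50% figures reported above.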
Abstract:
Despite widespread use of species-area relationships (SARs), dispute remains over the most representative SAR model. Using data on small-scale SARs of Estonian dry grassland communities, we address three questions: (1) Which model describes these SARs best when known artifacts are excluded? (2) How do deviating sampling procedures (marginal instead of central position of the smaller plots in relation to the largest plot; single values instead of average values; randomly located subplots instead of nested subplots) influence the properties of the SARs? (3) Are those effects likely to bias the selection of the best model? Our general dataset consisted of 16 series of nested plots (1 cm(2)-100 m(2), any-part system), each of which comprised five series of subplots located in the four corners and the centre of the 100-m(2) plot. Data for the three pairs of compared sampling designs were generated from this dataset by subsampling. Five function types (power, quadratic power, logarithmic, Michaelis-Menten, Lomolino) were fitted with non-linear regression. In some of the communities, we found extremely high species densities (including bryophytes and lichens), namely up to eight species in 1 cm(2) and up to 140 species in 100 m(2), which appear to be the highest documented values on these scales. For SARs constructed from nested-plot average-value data, the regular power function generally was the best model, closely followed by the quadratic power function, while the logarithmic and Michaelis-Menten functions performed poorly throughout. However, the relative fit of the latter two models improved significantly when the single-value or random-sampling method was applied, although the power function normally remained far superior. These results confirm the hypothesis that both single-value and random-sampling approaches cause artifacts by increasing stochasticity in the data, which can lead to the selection of inappropriate models.
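As an illustration of the model fitting described above, the regular power function S = c·A^z can be fitted by non-linear regression in a few lines. The species counts below are hypothetical numbers chosen to follow a power law, not the Estonian grassland data:

```python
import numpy as np
from scipy.optimize import curve_fit

def power_sar(area, c, z):
    """Regular power-function species-area relationship S = c * A**z."""
    return c * area ** z

# Hypothetical nested-plot averages: areas in m^2 (1 cm^2 .. 100 m^2),
# mean species counts per plot size.
areas = np.array([1e-4, 1e-3, 1e-2, 1e-1, 1.0, 10.0, 100.0])
species = np.array([2.1, 4.0, 7.6, 14.2, 27.0, 51.3, 98.0])

(c, z), _ = curve_fit(power_sar, areas, species, p0=(25.0, 0.25))
print(f"fitted c = {c:.2f}, z = {z:.3f}")
```

In practice each of the five candidate functions would be fitted this way and compared with an information criterion such as AIC; the power function's two parameters make it the most parsimonious of the set.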
Abstract:
Radiation metabolomics employing mass-spectral technologies represents a plausible means of high-throughput, minimally invasive radiation biodosimetry. A simplified metabolomics protocol is described that employs ubiquitous gas chromatography-mass spectrometry and open-source software, including the random forests machine-learning algorithm, to uncover latent biomarkers of 3 Gy gamma radiation in rats. Urine was collected from six male Wistar rats and six sham-irradiated controls for 7 days: 4 days prior to irradiation and 3 days after. Water and food consumption, urine volume, body weight, and sodium, potassium, calcium, chloride, phosphate and urea excretion showed major effects from exposure to gamma radiation. The metabolomics protocol uncovered several urinary metabolites that were significantly up-regulated (glyoxylate, threonate, thymine, uracil, p-cresol) and down-regulated (citrate, 2-oxoglutarate, adipate, pimelate, suberate, azelaate) as a result of radiation exposure. Thymine and uracil were shown to derive largely from thymidine and 2'-deoxyuridine, which are known radiation biomarkers in the mouse. The radiation metabolomic phenotype in rats appeared to derive from oxidative stress and effects on kidney function. Gas chromatography-mass spectrometry is a promising platform on which to develop the field of radiation metabolomics further and to assist in the design of instrumentation for use in detecting biological consequences of environmental radiation release.
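The biomarker-discovery step can be sketched as follows. This is a generic random-forests workflow on a synthetic metabolite table, not the study's actual pipeline; feature columns 0 and 1 stand in for an up-regulated and a down-regulated metabolite (e.g. thymine and citrate), and all intensities are simulated:

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

# Synthetic urine-metabolite table: rows are samples, columns are
# metabolite intensities (12 hypothetical metabolites).
rng = np.random.default_rng(0)
n_per_group = 30
controls = rng.normal(1.0, 0.1, size=(n_per_group, 12))
irradiated = rng.normal(1.0, 0.1, size=(n_per_group, 12))
irradiated[:, 0] += 0.8   # stand-in for an up-regulated metabolite
irradiated[:, 1] -= 0.5   # stand-in for a down-regulated metabolite

X = np.vstack([controls, irradiated])
y = np.array([0] * n_per_group + [1] * n_per_group)  # 0 = sham, 1 = irradiated

# Random forests rank metabolites by how strongly they separate the groups,
# which is the sense in which they "uncover latent biomarkers".
clf = RandomForestClassifier(n_estimators=200, random_state=0).fit(X, y)
ranked = np.argsort(clf.feature_importances_)[::-1]
print("top discriminating metabolite columns:", ranked[:2])
```

The two shifted columns dominate the importance ranking; on real data the ranked metabolites would then be identified from their mass spectra and validated.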
Abstract:
The article introduces the E-learning Circle, a tool developed to assure the quality of the software design process of e-learning systems, considering pedagogical principles as well as technology. The E-learning Circle consists of a number of concentric circles which are divided into three sectors. The content of the inner circles is based on pedagogical principles, while the outer circle specifies how the pedagogical principles may be implemented with technology. The circle's centre is dedicated to the subject taught, ensuring focus on the specific subject's properties. The three sectors represent the student, the teacher and the learning objectives. The strengths of the E-learning Circle are its compact presentation combined with the overview it provides, as well as its usefulness as a design tool: dealing with complexity, providing a common language, and embedding best practice. The E-learning Circle is not a prescriptive method, but is useful in several design models and processes. The article presents two projects where the E-learning Circle was used as a design tool.
Abstract:
This research is situated at the intersection of educational science, computer science, and school practice, and thus has a strongly interdisciplinary character. From the perspective of educational science, it is a research project in the fields of e-learning and multimedia learning, addressing the question of suitable information systems for creating and exchanging digital, multimedia, interactive learning modules. First, the methodological and didactic advantages of digital learning content over classical media such as book and paper were compiled, and possible potentials in connection with new Web 2.0 technologies were identified. Building on this, existing authoring tools for creating digital learning modules and existing exchange platforms were analyzed to determine the extent to which they already support and use Web 2.0 technologies. From the computer science perspective, the analysis of existing systems yielded a requirements profile for a new authoring tool and a new exchange platform for digital learning modules. Following the Design Science Research approach, the new system was implemented in an iterative development process as the web application LearningApps.org and was continuously evaluated with teachers from school practice. Current web technologies were applied in the development. The result of this research is a production system that is already used by thousands of users in various countries, in schools as well as in industry. An empirical study confirmed that the goal pursued with the system development, namely to simplify the creation and exchange of digital learning modules, was achieved. From the perspective of school practice, LearningApps.org contributes to methodological diversity and to the use of ICT in the classroom.
The tool's orientation toward mobile devices and 1:1 computing corresponds to the general trend in education. By linking the tool with current software developments for the production of digital textbooks, educational publishers are also addressed as a target group.
Abstract:
Sideflexing plastic chains are increasingly used in material handling due to their highly flexible conveying design and layout options. These systems are often equipped with so-called modular belts. Because of their specific force transmission, detailed calculation methods are not yet available. In the following, a generally valid calculation approach is derived, and its differences from existing solutions are shown by examples.
Abstract:
ONTOLOGIES AND METHODS FOR INTEROPERABILITY OF ENGINEERING ANALYSIS MODELS (EAMS) IN AN E-DESIGN ENVIRONMENT. September 2007. Neelima Kanuri, B.S., Birla Institute of Technology and Science, Pilani, India; M.S., University of Massachusetts Amherst. Directed by: Professor Ian Grosse. Interoperability is the ability of two or more systems to exchange and reuse information efficiently. This thesis presents new techniques for interoperating engineering tools using ontologies as the basis for representing, visualizing, reasoning about, and securely exchanging abstract engineering knowledge between software systems. The specific engineering domain that is the primary focus of this report is the modeling knowledge associated with the development of engineering analysis models (EAMs). This abstract modeling knowledge has been used to support integration of analysis and optimization tools in iSIGHT-FD, a commercial engineering environment. ANSYS, a commercial FEA tool, has been wrapped as an analysis service available inside of iSIGHT-FD. An engineering analysis modeling (EAM) ontology has been developed and instantiated to form a knowledge base for representing analysis modeling knowledge. The instances of the knowledge base are the analysis models of real-world applications. To illustrate how abstract modeling knowledge can be exploited for useful purposes, a cantilever I-beam design optimization problem has been used as a test-bed proof-of-concept application. Two distinct finite element models of the I-beam are available to analyze a given beam design: a beam-element finite element model with potentially lower accuracy but significantly reduced computational cost, and a high-fidelity, high-cost shell-element finite element model. The goal is to obtain an optimized I-beam design at minimum computational expense. An intelligent knowledge-based tool was developed and implemented in FiPER.
This tool reasons about the modeling knowledge to intelligently shift between the beam and shell element models during an optimization process, selecting the best analysis model for a given optimization design state. In addition to improved interoperability and design optimization, methods are developed and presented that demonstrate the ability to operate on ontological knowledge bases to perform important engineering tasks. One such method is an automatic technical report generation method, which converts the modeling knowledge associated with an analysis model into a flat technical report. The second is a secure knowledge-sharing method, which allocates permissions to portions of knowledge to control knowledge access and sharing. Acting together, the two methods enable recipient-specific, fine-grained control of knowledge viewing and sharing in an engineering workflow integration environment such as iSIGHT-FD. These methods help reduce the large-scale inefficiencies in current product design and development cycles caused by poor knowledge sharing and reuse between people and software engineering tools. This work is a significant advance in both the understanding and the application of knowledge integration in a distributed engineering design framework.
Abstract:
Previous research suggests that the personality of a relationship partner predicts not only the individual’s own satisfaction with the relationship but also the partner’s satisfaction. Based on the actor–partner interdependence model, the present research tested whether actor and partner effects of personality are biased when the same method (e.g., self-report) is used for the assessment of personality and relationship satisfaction and, consequently, shared method variance is not controlled for. Data came from 186 couples, of whom both partners provided self- and partner reports on the Big Five personality traits. Depending on the research design, actor effects were larger than partner effects (when using only self-reports), smaller than partner effects (when using only partner reports), or of about the same size as partner effects (when using self- and partner reports). The findings attest to the importance of controlling for shared method variance in dyadic data analysis.
Abstract:
Recently it has been proposed that the evaluation of effects of pollutants on aquatic organisms can provide an early warning system of potential environmental and human health risks (NRC 1991). Unfortunately, few methods are available to aquatic biologists for assessing the effects of pollutants on aquatic animal community health. The primary goal of this research was to develop and evaluate the feasibility of such a method. Specifically, the primary objective of this study was to develop a prototype rapid bioassessment technique, similar to the Index of Biotic Integrity (IBI), for the upper Texas and northwestern Gulf of Mexico coastal tributaries. The IBI consists of a series of "metrics", each describing a specific attribute of the aquatic community. Each metric is given a score, and the scores are summed to derive a total assessment of the "health" of the aquatic community. This IBI procedure may provide an additional assessment tool for professionals in water quality management. The experimental design consisted primarily of compiling previously collected data from monitoring conducted by the Texas Natural Resource Conservation Commission (TNRCC) at five bayous classified according to potential for anthropogenic impact and salinity regime. Standardized hydrological, chemical, and biological monitoring had been conducted in each of these watersheds. Candidate metrics for inclusion in the estuarine IBI were identified and evaluated through correlation analysis, cluster analysis, stepwise and normal discriminant analysis, and evaluation of cumulative distribution frequencies. Scores of each included metric were determined based on exceedances of specific percentiles. Individual scores were summed, and a total IBI score and rank for the community computed. Results of these analyses yielded the proposed metrics and rankings listed in this report.
Based on the results of this study, incorporation of an estuarine IBI method as a water quality assessment tool is warranted. Adopted metrics were correlated with seasonal trends and less so with the salinity gradients observed during the study (0-25 ppt). Further refinement of this method is needed using a larger, more inclusive data set that includes additional habitat types, salinity ranges, and temporal variation.
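The percentile-based scoring described above can be sketched as follows. The 25th/75th-percentile cutoffs and the 1/3/5 scores are a common IBI convention assumed here for illustration; the metric names and reference values are hypothetical, not those adopted in the report:

```python
import numpy as np

def score_metric(value, reference, higher_is_better=True):
    """Score one community metric (1, 3, or 5) against reference percentiles."""
    lo, hi = np.percentile(reference, [25, 75])
    if not higher_is_better:
        # For "bad" metrics (e.g. % pollution-tolerant taxa), low is good.
        value, lo, hi = -value, -hi, -lo
    if value >= hi:
        return 5
    if value >= lo:
        return 3
    return 1

def ibi_score(metrics, references, directions):
    """Sum metric scores into a total IBI score for the community."""
    return sum(score_metric(metrics[name], references[name], directions[name])
               for name in metrics)

# Hypothetical metrics for one bayou community against reference distributions.
refs = {"taxa_richness": np.arange(5, 30),
        "pct_tolerant": np.linspace(5, 80, 25)}
site = {"taxa_richness": 26, "pct_tolerant": 12.0}
direction = {"taxa_richness": True, "pct_tolerant": False}
print("IBI score:", ibi_score(site, refs, direction))
```

A real estuarine IBI would use the metrics selected by the discriminant and cluster analyses and percentiles from the TNRCC monitoring data, but the scoring mechanics are the same.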
Abstract:
To quantify species-specific relationships between bivalve carbonate isotope geochemistry (δ18Oc) and water conditions (temperature and salinity, related to water isotopic composition [δ18Ow]), an aquaculture-based methodology was developed and applied to Mytilus edulis (blue mussel). The four-by-three factorial design consisted of four circulating temperature baths (7, 11, 15, and 19 °C) and three salinity ranges (23, 28, and 32 parts per thousand (ppt); monitored for δ18Ow weekly). In mid-July of 2003, 4800 juvenile mussels were collected in Salt Bay, Damariscotta, Maine, and were placed in each configuration. The size distribution of harvested mussels, based on 105 specimens, ranged from 10.9 mm to 29.5 mm with a mean size of 19.8 mm. The mussels were grown in controlled conditions for up to 8.5 months, and a paleotemperature relationship based on juvenile M. edulis from Maine was developed from animals harvested at months 4, 5, and 8.5. This relationship [T (°C) = 16.19 (±0.14) − 4.69 (±0.21)·(δ18Oc VPDB − δ18Ow VSMOW) + 0.17 (±0.13)·(δ18Oc VPDB − δ18Ow VSMOW)²; r² = 0.99; N = 105; P < 0.0001] is nearly identical to the Kim and O'Neil (1997) abiogenic calcite equation over the entire temperature range (7-19 °C), and it closely resembles the commonly used paleotemperature equations of Epstein et al. (1953) and Horibe and Oba (1972). Further, the comparison of the M. edulis paleotemperature equation with the Kim and O'Neil (1997) equilibrium-based equation indicates that the M. edulis specimens used in this study precipitated their shell in isotopic equilibrium with ambient water within the experimental uncertainties of both studies.
The aquaculture-based methodology described here allows similar species-specific isotope paleothermometer calibrations to be performed for other bivalve species and thus provides improved quantitative paleoenvironmental reconstructions.
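The reported paleotemperature relationship translates directly into code (the stated coefficient uncertainties are omitted here):

```python
def mussel_paleotemperature(d18o_calcite, d18o_water):
    """Juvenile M. edulis paleotemperature equation from this study.

    d18o_calcite: shell delta-18-O (VPDB, per mil)
    d18o_water:   ambient water delta-18-O (VSMOW, per mil)
    Returns the inferred growth temperature in degrees C.
    """
    d = d18o_calcite - d18o_water  # calcite-water isotopic offset
    return 16.19 - 4.69 * d + 0.17 * d * d

# When shell and water values coincide, the equation returns the intercept:
print(mussel_paleotemperature(0.0, 0.0))  # 16.19 degrees C
```

Temperature decreases as the calcite-water offset grows, which is the expected direction for oxygen-isotope paleothermometers; the equation is calibrated only over the experimental 7-19 °C range.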
Abstract:
OBJECTIVE This retrospective observational pilot study examined differences in peri-implant bone level changes (ΔIBL) between two similar implant types differing only in the surface texture of the neck. The hypothesis tested was that ΔIBL would be greater with machined-neck implants than with grooved-neck implants. METHOD AND MATERIALS 40 patients were enrolled; n = 20 implants with a machined neck (group 1) and n = 20 implants with a rough, grooved neck (group 2), all placed in the posterior mandible. Radiographs were obtained after loading (at 3 to 9 months) and at 12 to 18 months after implant insertion. A case number calculation with respect to ΔIBL was conducted. Groups were compared using a Brunner-Langer model, the Mann-Whitney test, the Wilcoxon signed rank test, and linear model analysis. RESULTS After the 12- to 18-month observation period, mean ΔIBL was -1.11 ± 0.92 mm in group 1 and -1.25 ± 1.23 mm in group 2. ΔIBL depended significantly on time (P < .001), but not on group. In both groups, mean marginal ΔIBL was significantly less than -1.5 mm. Only insertion depth had a significant influence on the amount of peri-implant bone loss (P = .013). Case number estimation testing for a difference between groups 1 and 2 with a power of 90% yielded a required sample size of 1,032 subjects per group. CONCLUSION ΔIBL values indicated that both implant designs fulfilled implant success criteria, and the modification of implant neck texture had no significant influence on ΔIBL.