15 results for real world context

in Digital Commons - Michigan Tech


Relevance:

100.00%

Publisher:

Abstract:

This study investigated the use of real-world contexts during instruction in a high school physics class (through building file folder bridges) and the resulting effects on student interest in the subject matter, level of understanding, and degree of retention. In particular, the study focused on whether increases in student interest were attained through the use of real-world contexts, and whether the elevated interest level led to a higher degree of subject matter understanding than would be achieved using more traditional teaching methods. The study also determined whether using real-world contexts ultimately resulted in greater knowledge retention by students. Class observations during traditionally taught units and during units that incorporated real-world contexts, along with a post-graduation questionnaire, were used to assess differences in student interest levels. Student pre- and post-unit test scores were evaluated and compared to determine whether statistical differences existed in the levels of understanding resulting from the different teaching methods. The post-graduation questionnaire results provided evidence of retention that could be related back to teaching methods. The results of this study revealed the importance of incorporating real-world contexts into science and mathematics courses. Students better understood the relevance of the lessons, which led to higher levels of interest and greater understanding than were achieved through more traditional teaching methods. The use of real-world contexts also improved knowledge retention.

Relevance:

100.00%

Publisher:

Abstract:

This research project measured the effects of real-world content in a science classroom by determining the change in deep knowledge of life science content (including ecosystems, from the Michigan Department of Education Grade Level Content Expectations) among a subset of 6th grade science students that resulted from adding real-world content, rearing trout in the classroom, to the curriculum. Data showed large gains from pre-test to post-test in students from both the experimental and control groups. The ecology unit implemented with the real-world trout content was even more successful, improving students' deep knowledge of ecosystem content from Michigan's Department of Education Grade Level Content Expectations. The gains by the experimental group on the constructed-response section of the test, which included higher cognitive level items, were significant. Clinical interviews after the post-test confirmed increases in deep knowledge of ecosystem concepts in the experimental group, revealing that a sample of experimental group students had a better grasp of important ecology concepts than a sample of control group students.

Relevance:

90.00%

Publisher:

Abstract:

Due to their high thermal efficiency, diesel engines have excellent fuel economy and have been widely used as a power source for many vehicles. Diesel engines emit less greenhouse gas (carbon dioxide) than gasoline engines. However, diesel engines emit large amounts of particulate matter (PM), which can imperil human health. The best way to reduce the particulate matter is a Diesel Particulate Filter (DPF) system, which consists of a wall-flow monolith that traps particulates; the DPF can be periodically regenerated to remove the collected particulates. Estimating the PM mass accumulated in the DPF and the total pressure drop across the filter is very important in order to determine when to carry out active regeneration. In this project, a filtration model and a pressure drop model were developed to estimate the PM mass and the total pressure drop; these two models were then linked with a previously developed regeneration model to predict when to regenerate the filter. The results of this project were:

1. A filtration model was reproduced to simulate the filtration process. By studying deep-bed filtration and cake filtration, the stages and quantity of mass accumulated in the DPF can be estimated. Filtration efficiency was found to increase faster during deep-bed filtration than during cake filtration. A "unit collector" theory was used in the filtration model, which explains the filtration mechanism well.

2. A parametric study was performed on the pressure drop model for changes in engine exhaust flow rate, deposit layer thickness, and inlet temperature. Five primary variables were found to impact the pressure drop in the DPF: the temperature gradient along the channel, deposit layer thickness, deposit layer permeability, wall thickness, and wall permeability.

3. The filtration and pressure drop models were linked with the regeneration model to determine when to carry out DPF regeneration. Regeneration should be initiated when the cake layer reaches a certain thickness, since a cake layer holding either too much or too little particulate requires more thermal energy to reach a high regeneration efficiency.

4. Diesel particulate trap regeneration strategies were formulated for real-world driving conditions to identify the most desirable conditions for DPF regeneration. Regeneration should be initiated when the vehicle's speed is high and sustained without stops; the regeneration duration is about 120 seconds and the inlet temperature for regeneration is 710 K.
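
To illustrate how a pressure drop model of this kind combines flow rate, temperature, layer thicknesses, and permeabilities, the sketch below applies Darcy's law in series across the soot cake and the porous wall. It is a deliberately simplified sketch with illustrative parameter values, not the project's full model (which also accounts for channel temperature gradients and flow losses).

```python
# Minimal sketch: Darcy-law pressure drop across the DPF soot cake and wall.
# Parameter values are illustrative assumptions, not taken from the project.

def gas_viscosity(T):
    """Sutherland's law for the dynamic viscosity of exhaust gas [Pa*s]."""
    return 1.458e-6 * T**1.5 / (T + 110.4)

def dpf_pressure_drop(Q, T, w_cake, k_cake=1e-14, w_wall=4e-4, k_wall=1e-13,
                      A_filt=2.0):
    """Series Darcy resistances of the cake and wall layers.

    Q      exhaust volumetric flow rate [m^3/s]
    T      inlet temperature [K]
    w_cake deposit (cake) layer thickness [m]
    k_*    layer permeabilities [m^2]; A_filt is total filtration area [m^2]
    """
    v = Q / A_filt                    # superficial wall velocity [m/s]
    mu = gas_viscosity(T)
    return mu * v * (w_cake / k_cake + w_wall / k_wall)   # [Pa]

# Example: ~7.6 kPa at the 710 K regeneration inlet temperature noted above.
print(dpf_pressure_drop(Q=0.05, T=710.0, w_cake=50e-6))
```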

Relevance:

90.00%

Publisher:

Abstract:

Since product take-back is mandated in Europe and has effects on producers worldwide, including in the U.S., designing efficient forward and reverse supply chain networks is becoming essential for business viability. Centralizing production facilities may reduce costs but perhaps not environmental impacts; decentralizing a supply chain may reduce transportation environmental impacts but increase capital costs. Facility location strategies of centralization or decentralization are tested for companies with supply chains that both take back and manufacture products. Decentralized and centralized production systems have different effects on the environment, industry, and the economy: decentralized production systems cluster suppliers within the geographical market region that the system serves, whereas centralized production systems have many dispersed suppliers that together meet all market demand. The point of this research is to further the understanding of company decision-makers about environmental impacts and costs when choosing a decentralized or centralized supply chain organizational strategy. This research explores what degree of centralization for a supply chain makes the most financial and environmental sense for siting facilities, and which factory locations can best handle the financial and environmental impacts of the particular processing steps needed for product manufacture. Two examples of facility location for supply chains with product take-back were considered: a theoretical case involving shoe resoling, and a real-world case study of the location of operations for a company that reclaims multiple products for use as material inputs. For the theoretical example a centralized facility location strategy was optimal, whereas for the case study a decentralized strategy was best. In conclusion, it is not possible to say that either a centralized or a decentralized facility location strategy is in general best for a company that takes back products. Each company's specific concerns, needs, and supply chain details will determine which degree of centralization creates the optimal strategy for siting its facilities.
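
The trade-off described here can be sketched numerically: adding sites raises capital cost while shrinking total shipping distance (a proxy for transport cost and emissions). The toy comparison below uses made-up demand points and cost coefficients; it only shows the shape of the decision, not the shoe-resoling or reclamation cases studied.

```python
import numpy as np

rng = np.random.default_rng(0)
demand = rng.uniform(0, 100, size=(200, 2))   # hypothetical market locations

def total_cost(sites, demand, capital_per_site=5000.0, cost_per_km=2.0):
    # Assign each demand point to its nearest facility (forward + take-back trips),
    # then add a fixed capital cost per open site.
    d = np.linalg.norm(demand[:, None, :] - sites[None, :, :], axis=2)
    transport = d.min(axis=1).sum() * cost_per_km
    return len(sites) * capital_per_site + transport

centralized = demand.mean(axis=0, keepdims=True)                   # one central site
decentralized = np.array([[25, 25], [25, 75], [75, 25], [75, 75]])  # four regional sites

print("centralized:  ", total_cost(centralized, demand))
print("decentralized:", total_cost(decentralized, demand))
```

Which strategy wins flips as the capital/transport cost ratio changes, mirroring the abstract's conclusion that the answer is company-specific.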

Relevance:

90.00%

Publisher:

Abstract:

This dissertation reports on a collaborative project between the Computer Science and Humanities Departments to develop case studies that focus on issues of communication in the workplace, and on the results of their use in the classroom. My argument is that case study teaching simulates real-world experience in a meaningful way, essentially offering a teachable way of developing phronesis, the reasoned capacity to act for the good in public. In addition, the dissertation can be read as a "how-to" guide for educators who may wish to construct their own case studies. To that end, I have included a discussion of the ethnographic methodologies employed and how they were adapted to our more pragmatic ends. Finally, I present my overarching argument for a new appraisal of the concept of techné. This reappraisal emphasizes its productive activity, poiesis, rather than its knowledge, as has been the case in the past. I propose that focusing on the telos, the end outside the production, contributes to the diminishment, if not complete foreclosure, of a rich concept of techné.

Relevance:

90.00%

Publisher:

Abstract:

This report shares my efforts in developing a solid unit of instruction with a clear focus on student outcomes. I have been a teacher for 20 years and have been writing and revising curricula for much of that time. However, most of this was developed without the benefit of current research on how students learn and did not focus on what and how students are learning. My journey as a teacher has involved a lot of trial and error. My traditional method of teaching is to look at the benchmarks (now content expectations) to see what needs to be covered. My unit consists of having students read the appropriate sections in the textbook, complete worksheets, watch a video, and take some notes. I try to include at least one hands-on activity, one or more quizzes, and the traditional end-of-unit test consisting mostly of multiple choice questions I find in the textbook. I try to be engaging, make the lessons fun, and hope that at the end of the unit my students get whatever concepts I've presented so that we can move on to the next topic. I want to increase students' understanding of science concepts and their ability to connect that understanding to the real world. However, sometimes I feel that my lessons are missing something. For a long time I have wanted to develop a unit of instruction that I know is an effective tool for the teaching and learning of science. In this report, I describe my efforts to reform my curricula using the "Understanding by Design" process. I want to see if this style of curriculum design will help me be a more effective teacher and if it will lead to an increase in student learning. My hypothesis is that this new (for me) approach to teaching will lead to increased understanding of science concepts among students because it is based on purposefully thinking about learning targets based on "big ideas" in science. For my reformed curricula I incorporate lessons from several outstanding programs I've been involved with, including EpiCenter (Purdue University), Incorporated Research Institutions for Seismology (IRIS), the Master of Science Program in Applied Science Education at Michigan Technological University, and the Michigan Association for Computer Users in Learning (MACUL). In this report, I present the methodology I used to develop a new unit of instruction based on the Understanding by Design process. Several lessons and learning plans I developed for the unit, which follow the 5E Learning Cycle, are presented as appendices at the end of this report. I also include the results of pilot testing one of the lessons. Although the lesson I pilot-tested was not as successful in increasing student learning outcomes as I had anticipated, the development process I followed was helpful in that it required me to focus on important concepts. Conducting the pilot test was also helpful because it led me to identify ways in which I could improve the lesson in the future.

Relevance:

90.00%

Publisher:

Abstract:

The selective catalytic reduction (SCR) system is a well-established technology for NOx emissions control in diesel engines. A one-dimensional, single-channel SCR model was previously developed using reactor data generated at Oak Ridge National Laboratory (ORNL) for an iron-zeolite catalyst system. This work presents the calibration of that model to fit the experimental reactor data collected at ORNL for a copper-zeolite SCR catalyst. Initially, a test protocol was developed to investigate the different phenomena responsible for the SCR system response. An SCR model with two distinct types of storage sites was used. The calibration process started with storage capacity calculations for the catalyst sample. Then the chemical kinetics occurring in each segment of the protocol were investigated. The reactions included in this model were adsorption, desorption, standard SCR, fast SCR, slow SCR, NH3 oxidation, NO oxidation, and N2O formation. The reaction rates were identified for each temperature using a time-domain optimization approach. Assuming an Arrhenius form of the reaction rates, activation energies and pre-exponential parameters were fit to the identified rates. The results indicate that the Arrhenius form is appropriate and that the reaction scheme used allows the model to fit the experimental data and to be used in real-world engine studies.
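
The Arrhenius fit described here, k(T) = A exp(-Ea/RT), is essentially a linear regression of ln k against 1/T. A minimal sketch of that step, with made-up temperatures and rate constants standing in for the values identified by the time-domain optimization:

```python
import numpy as np

R = 8.314                                    # gas constant [J/(mol K)]
T = np.array([473.0, 523.0, 573.0, 623.0])   # hypothetical temperatures [K]
k = np.array([0.8, 3.1, 9.5, 24.0])          # hypothetical identified rates

# ln k = ln A - Ea/(R*T)  ->  linear in 1/T
slope, intercept = np.polyfit(1.0 / T, np.log(k), 1)
Ea = -slope * R          # activation energy [J/mol]
A = np.exp(intercept)    # pre-exponential factor

print(f"Ea = {Ea / 1000:.1f} kJ/mol, A = {A:.3g}")
```

A good linear fit of ln k versus 1/T is what supports the abstract's conclusion that the Arrhenius form is appropriate for each reaction.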

Relevance:

90.00%

Publisher:

Abstract:

Fuzzy community detection identifies fuzzy communities in a network: groups of vertices such that the membership of a vertex in one community lies in [0,1] and the sum of a vertex's memberships across all communities equals 1. Fuzzy communities are pervasive in social networks, but little work has been done on fuzzy community detection. Recently, a one-step extension of Newman's Modularity, the most popular quality function for disjoint community detection, resulted in the Generalized Modularity (GM), which demonstrates good performance in finding well-known fuzzy communities. Thus, GM is chosen as the quality function in our research. We first propose a generalized fuzzy t-norm modularity to investigate the effect of different fuzzy intersection operators on fuzzy community detection, since GM makes the introduction of a fuzzy intersection operation feasible. The experimental results show that the Yager operator with a proper parameter value performs better than the product operator in revealing community structure. Then, we focus on how to find optimal fuzzy communities in a network by directly maximizing GM, which we call the Fuzzy Modularity Maximization (FMM) problem. The work on the FMM problem yields the major contribution of this thesis: an efficient and effective GM-based fuzzy community detection method that automatically discovers a fuzzy partition of a network when one is appropriate (much better than the fuzzy partitions found by existing fuzzy community detection methods) and a crisp partition when that is appropriate (competitive with the partitions produced by the best disjoint community detection methods to date). We address the FMM problem by iteratively solving a sub-problem called One-Step Modularity Maximization (OSMM). We present two approaches for this iterative procedure: a tree-based global optimizer called Find Best Leaf Node (FBLN) and a heuristic-based local optimizer. The OSMM problem reduces to a simplified quadratic knapsack problem that can be solved in linear time; thus, a solution of OSMM can be found in linear time. Since the OSMM algorithm is called recursively within FBLN and the structure of the search tree is non-deterministic, the FMM/FBLN algorithm runs in at least O(n^2) time. We therefore also propose several highly efficient and very effective heuristic algorithms, namely the FMM/H algorithms. We compared the proposed FMM/H algorithms with two state-of-the-art community detection methods, modified MULTICUT Spectral Fuzzy c-Means (MSFCM) and a Genetic Algorithm with a Local Search strategy (GALS), on 10 real-world data sets. The experimental results suggest that the H2 variant of FMM/H is the best performing version: H2 is very competitive with GALS in producing maximum-modularity partitions, performs much better than MSFCM, and is 2-3 orders of magnitude faster than GALS on all 10 data sets. Furthermore, by adopting a slightly modified version of the H2 algorithm as a mutation operator, we designed a genetic algorithm for fuzzy community detection, namely GAFCD, in which elite selection and early termination are applied. The crossover operator is designed to make GAFCD converge fast and to enhance GAFCD's ability to escape local minima. Experimental results on all the data sets show that GAFCD uncovers better community structure than GALS.
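
For orientation, one common form of generalized modularity evaluates the standard modularity matrix against soft memberships using the product t-norm. A minimal sketch under that assumption (the thesis also studies other intersection operators, such as the Yager operator, which this does not implement):

```python
import numpy as np

def generalized_modularity(A, U):
    """Fuzzy modularity with the product t-norm (one common form of GM).

    A: (n, n) symmetric adjacency matrix
    U: (n, c) membership matrix; each row lies in [0, 1] and sums to 1
    """
    k = A.sum(axis=1)              # vertex degrees
    m2 = k.sum()                   # 2m, twice the edge count
    B = A - np.outer(k, k) / m2    # modularity matrix
    # Q = (1/2m) * sum_c u_c^T B u_c, i.e. trace of U^T B U scaled by 1/2m
    return np.trace(U.T @ B @ U) / m2

# Toy check: two triangles joined by one edge, with crisp memberships.
A = np.array([[0,1,1,0,0,0],[1,0,1,0,0,0],[1,1,0,1,0,0],
              [0,0,1,0,1,1],[0,0,0,1,0,1],[0,0,0,1,1,0]])
U = np.array([[1,0]]*3 + [[0,1]]*3, dtype=float)
print(generalized_modularity(A, U))   # ~0.357 for this graph
```

With crisp 0/1 memberships this reduces to Newman's modularity, which is the sense in which GM is a one-step extension of it.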

Relevance:

90.00%

Publisher:

Abstract:

Free radicals are present in cigarette smoke and can have a negative effect on human health by attacking lipids, nucleic acids, proteins, and other biologically important species. However, because of the complexity of the tobacco smoke system and the dynamic nature of radicals, little is known about the identity of the radicals, and debate continues on the mechanisms by which they are produced. In this study, acetyl radicals were trapped from the gas phase using 3-amino-2,2,5,5-tetramethyl-proxyl (3AP) on solid support to form stable 3AP adducts for later analysis by high performance liquid chromatography (HPLC), mass spectrometry and tandem mass spectrometry (MS, MS/MS), and liquid chromatography-mass spectrometry (LC-MS). Simulations of acetyl radical generation were performed using MATLAB and the Master Chemical Mechanism (MCM) programs. A range of 10-150 nmol/cigarette of acetyl radical was measured in gas phase tobacco smoke from both commercial and research cigarettes under several different smoking conditions. More radicals were detected with the puff smoking method than with continuous flow sampling. Approximately twice as many acetyl radicals were trapped when a GF/F particle filter was placed before the trapping zone. Computational simulations show that NO/NO2 reacts with isoprene, initiating chain reactions that produce hydroxyl radicals, which abstract hydrogen from acetaldehyde to generate acetyl radicals. With initial concentrations of NO, acetaldehyde, and isoprene typical of a real-world cigarette smoke scenario, these mechanisms can account for the full amount of acetyl radical detected experimentally. This study contributes to the overall understanding of free radical generation in gas phase cigarette smoke.
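
The final step of the proposed mechanism, OH abstracting hydrogen from acetaldehyde to give the acetyl radical, can be sketched as a toy kinetics integration. The scheme and numbers below are illustrative stand-ins, not the MCM simulation used in the study; in particular, the NO/NO2-isoprene chain chemistry is lumped into an assumed constant OH source.

```python
from scipy.integrate import solve_ivp

k1 = 1.5e-11   # OH + CH3CHO -> CH3CO + H2O [cm^3 molec^-1 s^-1], typical literature magnitude
S_OH = 1.0e7   # lumped OH source from NO/NO2-isoprene chemistry [molec cm^-3 s^-1] (assumed)
L_OH = 1.0     # first-order OH loss to other smoke constituents [s^-1] (assumed)

def rhs(t, y):
    oh, ald, acetyl = y
    r1 = k1 * oh * ald                        # H-abstraction from acetaldehyde
    return [S_OH - r1 - L_OH * oh, -r1, r1]

y0 = [0.0, 2.5e13, 0.0]                       # initial [OH], [CH3CHO], [CH3CO]
sol = solve_ivp(rhs, (0.0, 2.0), y0, max_step=0.01)
print(f"acetyl formed after 2 s: {sol.y[2, -1]:.3g} molec/cm^3")
```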

Relevance:

90.00%

Publisher:

Abstract:

Sensor networks have been an active research area in the past decade due to the variety of their applications. Many research studies have been conducted to solve the problems underlying the middleware services of sensor networks, such as self-deployment, self-localization, and synchronization. With these middleware services in place, sensor networks have grown into a mature technology used as a detection and surveillance paradigm for many real-world applications. Individual sensors are small in size, so they can be deployed in areas with limited space to make unobstructed measurements in locations that traditional centralized systems would have trouble reaching. However, sensor networks have a few physical limitations that can prevent sensors from performing at their maximum potential: individual sensors have a limited power supply, the wireless band can become very cluttered when multiple sensors transmit at the same time, and the limited communication range of individual sensors means the network may not have a 1-hop communication topology, making routing a problem in many cases. Carefully designed algorithms can alleviate these physical limitations and allow sensor networks to be utilized to their full potential. Graphical models are an intuitive choice for designing sensor network algorithms. This thesis focuses on a classic application of sensor networks: detecting and tracking targets. It develops feasible inference techniques for sensor networks using statistical graphical model inference, binary sensor detection, event isolation, and dynamic clustering. The main strategy is to use only binary data for rough global inferences and then dynamically form small-scale clusters around the target for detailed computations. This framework is then extended to network topology manipulation, so that it can be applied to tracking in different network topology settings. Finally, the system was tested in both simulation and real-world environments. The simulations were performed on various network topologies, from regularly distributed networks to randomly distributed networks. The results show that the algorithm performs well in randomly distributed networks and hence requires minimal deployment effort. The experiments were carried out in both corridor and open-space settings. An in-home fall detection system was set up in a real-world setting, with 30 Bumblebee radars and 30 ultrasonic sensors driven by TI EZ430-RF2500 boards scanning a typical 800 sq ft apartment. The Bumblebee radars were calibrated to detect a falling human body, and the two-tier tracking algorithm was used on the ultrasonic sensors to track the location of elderly people.
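
A minimal sketch of the two-tier strategy, rough localization from binary detections followed by dynamic clustering, under the simplifying assumption that every firing sensor lies within a known sensing radius of the target:

```python
import numpy as np

def rough_estimate(sensor_pos, fired):
    """Tier 1: coarse target location from binary data only,
    taken as the centroid of the sensors reporting a detection."""
    return sensor_pos[fired].mean(axis=0)

def dynamic_cluster(sensor_pos, estimate, radius):
    """Tier 2: recruit only the sensors near the coarse estimate;
    these perform (and transmit) the detailed measurements."""
    d = np.linalg.norm(sensor_pos - estimate, axis=1)
    return np.flatnonzero(d <= radius)

rng = np.random.default_rng(1)
pos = rng.uniform(0, 50, size=(30, 2))              # 30 randomly deployed sensors
target = np.array([20.0, 30.0])
fired = np.linalg.norm(pos - target, axis=1) < 10   # simulated binary detections

est = rough_estimate(pos, fired)
print("coarse estimate:", est, "cluster:", dynamic_cluster(pos, est, 10))
```

Keeping detailed sensing and transmission inside the small cluster is what reduces the communication load and energy consumption relative to having every node report continuously.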

Relevance:

90.00%

Publisher:

Abstract:

Heuristic optimization algorithms are of great importance for solving various real-world problems, with applications ranging from cost reduction to artificial intelligence and medicine. Here, the cost is the value of a function of several independent variables. Often, when dealing with engineering problems, we want to minimize the value of a function in order to achieve an optimum, or to maximize another parameter that increases as the cost (the value of this function) decreases. Heuristic cost reduction algorithms work by finding the values of the independent variables for which the value of the function (the "cost") is minimal. There is an abundance of heuristic cost reduction algorithms to choose from. We start with a discussion of various optimization algorithms, such as memetic algorithms, force-directed placement, and evolution-based algorithms. Following this initial discussion, we examine the working of three algorithms and implement them in MATLAB. The focus of this report is to provide detailed information on the working of three different heuristic optimization algorithms and to conclude with a comparative study of their performance when implemented in MATLAB. The three algorithms considered are the non-adaptive simulated annealing algorithm, the adaptive simulated annealing algorithm, and the random restart hill climbing algorithm. The algorithms are heuristic in nature: the solutions they reach may not be the best of all possible solutions, but they provide a reasonably good solution quickly, without taking an indefinite amount of time.
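
Although the report implements these algorithms in MATLAB, the non-adaptive simulated annealing variant can be sketched compactly in Python. The fixed geometric cooling schedule is what makes it "non-adaptive"; an adaptive variant would instead adjust the temperature or step size based on the observed acceptance rate.

```python
import math, random

def simulated_annealing(f, x0, neighbor, T0=1.0, alpha=0.95, iters=5000):
    """Non-adaptive SA: fixed geometric cooling schedule T <- alpha * T."""
    x, fx = x0, f(x0)
    best, fbest = x, fx
    T = T0
    for _ in range(iters):
        y = neighbor(x)
        fy = f(y)
        # Always accept improvements; accept uphill moves with Boltzmann probability.
        if fy < fx or random.random() < math.exp(-(fy - fx) / T):
            x, fx = y, fy
            if fx < fbest:
                best, fbest = x, fx
        T *= alpha
    return best, fbest

# Example: a 1-D cost function with many local minima.
f = lambda x: x * x + 10 * math.sin(3 * x)
step = lambda x: x + random.gauss(0, 0.5)
print(simulated_annealing(f, x0=5.0, neighbor=step))
```

Random restart hill climbing differs only in the acceptance rule: it never accepts uphill moves, and instead restarts from a fresh random point when it stalls.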

Relevance:

90.00%

Publisher:

Abstract:

The amount of information on the Internet has exploded in recent decades. As more and more news stories, blogs, and other kinds of articles are published on the Internet, categorization of articles and documents is increasingly desired. Among approaches to categorizing articles, labeling is one of the most common methods; it provides a relatively intuitive and effective way to separate articles into different categories. However, manual labeling is limited by its efficiency, even though manually selected labels have relatively high quality. This report explores the topic modeling approach of Online Latent Dirichlet Allocation (Online-LDA). Additionally, a method to automatically label articles with their latent topics, combining the Online-LDA posterior with a probabilistic automatic labeling algorithm, is implemented. The goal of this report is to examine the accuracy of the labels generated automatically by a topic model and probabilistic relevance algorithm for a set of real-world, dynamically updated articles from an online Rich Site Summary (RSS) service.
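
A minimal sketch of the pipeline's shape using scikit-learn's online variational LDA; the argmax over the topic posterior is a naive stand-in for the report's probabilistic labeling algorithm, and the tiny corpus is hypothetical rather than a real RSS feed.

```python
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.decomposition import LatentDirichletAllocation

# Hypothetical stand-ins for dynamically fetched RSS articles.
docs = [
    "stock markets rallied as bank earnings beat forecasts",
    "the central bank held interest rates steady this quarter",
    "the team won the championship after a late goal",
    "injured striker misses the final match of the season",
]

vec = CountVectorizer(stop_words="english")
X = vec.fit_transform(docs)

lda = LatentDirichletAllocation(n_components=2, learning_method="online",
                                random_state=0)
theta = lda.fit_transform(X)            # per-document topic posteriors

vocab = vec.get_feature_names_out()
for doc, topic in zip(docs, theta.argmax(axis=1)):
    top_words = [vocab[i] for i in lda.components_[topic].argsort()[-3:]]
    print(topic, top_words, "<-", doc[:40])
```

Because the online variant supports `lda.partial_fit` on new batches, the model can keep up with a dynamically updated feed without retraining from scratch.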

Relevance:

90.00%

Publisher:

Abstract:

Sustainable development has only recently started examining existing infrastructure, and a key aspect of this is hazard mitigation. Examining buildings from a sustainability perspective requires an understanding of a building's life-cycle environmental costs, including the environmental impacts induced by earthquake damage. Damage repair costs lead to additional material and energy consumption, and hence to harmful environmental impacts. Merging the results of a seismic evaluation and a life-cycle analysis for buildings gives a novel outlook on sustainable design decisions. To evaluate the environmental impacts caused by buildings, the long-term impacts accrued throughout a building's lifetime and the impacts associated with damage repair need to be quantified. A method and literature review for completing this examination has been developed and is discussed. Using the Athena and HAZUS-MH software, this study evaluated the performance of steel and concrete buildings considering their life-cycle assessments and earthquake resistance. It was determined that the code design level greatly affects building repair and damage estimates. This study presented two case-study buildings and found results specific to several stated assumptions. Recommendations for future research were provided to make this methodology more useful in real-world applications. Examining a building's costs and environmental impacts through a cradle-to-grave analysis and seismic damage assessment will help reduce material consumption and construction activity both before and after an earthquake occurs.
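
The merging step described here reduces to adding expected repair impacts, weighted by damage-state probabilities of the kind HAZUS-MH produces, to the embodied impacts from the life-cycle analysis. A sketch with purely illustrative numbers, not results from the study's Athena/HAZUS-MH runs:

```python
# Illustrative damage-state probabilities over the building's life (assumed).
P_damage   = {"slight": 0.20, "moderate": 0.08, "extensive": 0.02, "complete": 0.005}
# Illustrative repair impacts per damage state [t CO2e] (assumed).
repair_co2 = {"slight": 5.0,  "moderate": 40.0, "extensive": 150.0, "complete": 400.0}
embodied_co2 = 900.0   # cradle-to-grave embodied impact [t CO2e] (assumed)

expected_repair = sum(P_damage[ds] * repair_co2[ds] for ds in P_damage)
print(f"life-cycle impact incl. expected seismic repairs: "
      f"{embodied_co2 + expected_repair:.1f} t CO2e")
```

A stricter code design level shifts probability mass toward the lighter damage states, which is why it has such a strong effect on the repair and damage estimates.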

Relevance:

90.00%

Publisher:

Abstract:

To analyze the characteristics and predict the dynamic behaviors of complex systems over time, comprehensive research is crucially needed to enable the development of systems that can intelligently adapt to evolving conditions and infer new knowledge with algorithms that are not predesigned. This dissertation research studies the integration of techniques and methodologies from the fields of pattern recognition, intelligent agents, artificial immune systems, and distributed computing platforms to create technologies that can more accurately describe and control the dynamics of real-world complex systems. The need for such technologies is emerging in manufacturing, transportation, hazard mitigation, weather and climate prediction, homeland security, and emergency response. Motivated by the ability of mobile agents to dynamically incorporate additional computational and control algorithms into executing applications, mobile agent technology is employed in this research for adaptive sensing and monitoring in a wireless sensor network. Mobile agents are software components that can travel from one computing platform to another in a network, carrying the programs and data states needed to perform their assigned tasks. To support the generation, migration, communication, and management of mobile monitoring agents, an embeddable mobile agent system (Mobile-C) is integrated with sensor nodes. Mobile monitoring agents visit distributed sensor nodes, read real-time sensor data, and perform anomaly detection using the equipped pattern recognition algorithms. Optimal control of agents is achieved by mimicking the adaptive immune response and applying multi-objective optimization algorithms. The mobile agent approach has the potential to reduce the communication load and energy consumption in monitoring networks. The major research work of this dissertation project includes: (1) studying effective feature extraction methods for time series measurement data; (2) investigating the impact of the feature extraction methods and dissimilarity measures on the performance of pattern recognition; (3) researching the effects of environmental factors on the performance of pattern recognition; (4) integrating an embeddable mobile agent system with wireless sensor nodes; (5) optimizing agent generation and distribution using artificial immune system concepts and multi-objective algorithms; (6) applying mobile agent technology and pattern recognition algorithms to adaptive structural health monitoring and driving cycle pattern recognition; and (7) developing a web-based monitoring network to enable remote visualization and analysis of real-time sensor data. The techniques and algorithms developed in this dissertation project will contribute to research advances in networked distributed systems operating under changing environments.
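
As one concrete (and hypothetical, not taken from the dissertation) instance of item (1), a sensor time series can be condensed into a small statistical/spectral feature vector before pattern matching:

```python
import numpy as np

def extract_features(x, fs=100.0):
    """Condense a sensor time series into a small feature vector.

    x: 1-D measurement array; fs: sampling rate [Hz].
    Returns mean, std, peak-to-peak range, dominant frequency, and its power.
    """
    spec = np.abs(np.fft.rfft(x - x.mean()))**2   # power spectrum, mean removed
    freqs = np.fft.rfftfreq(len(x), d=1.0 / fs)
    peak = spec.argmax()
    return np.array([x.mean(), x.std(), np.ptp(x), freqs[peak], spec[peak]])

t = np.arange(0, 2, 0.01)                          # 2 s of data at 100 Hz
x = np.sin(2 * np.pi * 5 * t) + 0.1 * np.random.randn(len(t))
print(extract_features(x))                         # dominant frequency ~5 Hz
```

Compact features like these keep the payload a mobile agent must carry and transmit small, which matters for the communication and energy budgets discussed above.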

Relevance:

90.00%

Publisher:

Abstract:

Traditional decision making research has often focused on one's ability to choose from a set of prefixed options, ignoring the process by which decision makers generate courses of action (i.e., options) in situ (Klein, 1993). In complex and dynamic domains, this option generation process is particularly critical to understanding how successful decisions are made (Zsambok & Klein, 1997). When generating response options for oneself to pursue (i.e., during the intervention phase of decision making), previous research has supported quick and intuitive heuristics, such as the Take-The-First heuristic (TTF; Johnson & Raab, 2003). When generating predictive options for others in the environment (i.e., during the assessment phase of decision making), previous research has supported the situational-model-building process described by Long Term Working Memory theory (LTWM; see Ward, Ericsson, & Williams, 2013). In the first three experiments, the claims of TTF and LTWM are tested during assessment- and intervention-phase tasks in soccer. To test what other environmental constraints may dictate the use of these cognitive mechanisms, the claims of these models are also tested in the presence and absence of time pressure. In addition to understanding the option generation process, it is important that researchers in complex and dynamic domains also develop tools that can be used by real-world professionals. For this reason, three more experiments were conducted to evaluate the effectiveness of a new online assessment of perceptual-cognitive skill in soccer. This test differentiated between skill groups, predicted performance on a previously established test, and predicted option generation behavior. The test also outperformed domain-general cognitive tests, but not a domain-specific knowledge test, when predicting skill group membership. Implications for theory and training, and future directions for the development of applied tools, are discussed.