945 results for digital forensic tool testing


Relevance: 30.00%

Abstract:

Determination of chloride concentration in sweat is the current diagnostic gold standard for Cystic Fibrosis (CF). Nanoduct(R) is a new analyzing system that measures conductivity, requires only 3 microliters of sweat, and gives results within 30 minutes. The aim of the study was to evaluate the applicability of this system in the clinical setting of three children's hospitals; borderline results were compared with sweat chloride concentration. Over 3 years, 1,041 subjects were tested and diagnostic results were obtained in 946. In 95 children Nanoduct(R) failed (9.1% failure rate), mainly in preterm babies and newborns. Assuming 59 mmol/L as the upper limit of normal conductivity, all 46 CF patients were correctly diagnosed (sensitivity 100%, 95% CI: 93.1-100; negative predictive value 100%, 95% CI: 99.6-100) and only 39 non-CF subjects tested false positive (39/900, 4.3%; specificity 95.7%, 95% CI: 94.2-96.9; positive predictive value 54.1%, 95% CI: 43.4-65.0). When the diagnostic limit was increased to 80 mmol/L, the false-positive rate fell to 0.3% (3/900). CF patients had a median conductivity of 115 mmol/L; non-CF subjects a median of 37 mmol/L. In conclusion, the Nanoduct(R) test is a reliable tool for CF diagnosis: it has a failure rate comparable to other sweat tests and can be used as a simple bedside test for fast and reliable exclusion, diagnosis, or suspicion of CF. In cases with borderline conductivity (60-80 mmol/L), additional methods (determination of chloride and genotyping) are indicated.
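
The reported accuracy figures follow directly from the counts in the abstract (46 CF patients, all above the cutoff; 39 of 900 non-CF subjects falsely positive). A minimal sketch of the arithmetic:

```python
# Diagnostic metrics for the Nanoduct conductivity test at the
# 59 mmol/L cutoff, using the counts reported in the abstract.
tp, fn = 46, 0        # all 46 CF patients above the cutoff
fp = 39               # non-CF subjects above the cutoff
tn = 900 - fp         # remaining non-CF subjects

sensitivity = tp / (tp + fn)   # 1.0   -> 100%
specificity = tn / (tn + fp)   # ~0.957 -> 95.7%
ppv = tp / (tp + fp)           # ~0.541 -> 54.1%
npv = tn / (tn + fn)           # 1.0   -> 100%

print(f"sensitivity {sensitivity:.1%}, specificity {specificity:.1%}")
print(f"PPV {ppv:.1%}, NPV {npv:.1%}")
```

The confidence intervals quoted in the abstract would additionally require an interval method such as Clopper-Pearson.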

Relevance: 30.00%

Abstract:

OBJECTIVE: The aim of this study was to compare the results of tendency-oriented perimetry (TOP) and a dynamic strategy in Octopus perimetry as screening methods in clinical practice. DESIGN: A prospective single-centre observational case series was performed. PARTICIPANTS AND METHODS: In a newly opened general ophthalmologic practice, 89 consecutive patients (171 eyes) with a clinical indication for Octopus static perimetry testing (ocular hypertension or suspicious optic nerve cupping) were examined prospectively with TOP and a dynamic strategy. The visual fields were graded by 3 masked observers as normal, borderline, or abnormal without any further clinical information. RESULTS: 83% of eyes showed the same result for both strategies. In 14% there was a small difference (one visual field being abnormal or normal, the other borderline). In only 2.9% of the eyes (5 cases) was there a contradictory result. In 4 of the 5 cases the dynamic visual field was abnormal and TOP was normal. Four of these cases came back for a second examination, and in all 4 the follow-up examination showed a normal second dynamic visual field. CONCLUSIONS: Octopus static perimetry using a TOP strategy is a fast, patient-friendly, and very reliable screening tool for general ophthalmological practice. We found no false-negative results in our series.
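
The reported proportions can be cross-checked for internal consistency; the absolute counts for the "same" and "small difference" categories below are back-calculated assumptions, not values stated in the abstract:

```python
# Consistency check on the reported agreement figures (n = 171 eyes).
n_eyes = 171
same = round(0.83 * n_eyes)        # 142 eyes (assumed from reported 83%)
small_diff = round(0.14 * n_eyes)  # 24 eyes (assumed from reported 14%)
contradictory = 5                  # given explicitly in the abstract

# the three categories should account for every graded eye
assert same + small_diff + contradictory == n_eyes

print(f"contradictory: {contradictory / n_eyes:.1%}")  # 2.9%, as reported
```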

Relevance: 30.00%

Abstract:

Non-invasive documentation methods such as surface scanning and radiological imaging are gaining importance in the forensic field. These three-dimensional technologies provide digital 3D data that are processed and handled on the computer. However, the sense of touch is lost in the virtual approach. A haptic device restores the sense of touch for handling and feeling digital 3D data. The multifunctional application of a haptic device for forensic purposes is evaluated and illustrated in three different cases: the non-invasive representation of bone fractures of the lower extremities caused by traffic accidents; the comparison of bone injuries with the presumed injury-inflicting instrument; and, in a gunshot case, the identification of the gun by the muzzle imprint and the reconstruction of the holding position of the gun. The 3D models of the bones are generated from Computed Tomography (CT) images. The 3D models of the exterior injuries, the injury-inflicting tools, and the bone injuries, where a higher resolution is necessary, are created by optical surface scanning. The haptic device is used in combination with the software FreeForm Modelling Plus to touch the surface of the 3D models in order to feel minute injuries and the surfaces of tools, to reposition displaced bone parts, and to compare an injury-causing instrument with an injury. The repositioning of 3D models in a reconstruction is easier, faster, and more precise when the sense of touch and user-friendly movement in 3D space are available. For representation purposes, the fracture lines of bones are coloured. This work demonstrates that the haptic device is a suitable and efficient tool in forensic science, offering a new way of handling digital data in virtual 3D space.

Relevance: 30.00%

Abstract:

Dental identification is the most valuable method for identifying human remains, both in single cases with major postmortem alterations and in mass casualties, because of its practicability and proven reliability. Computed tomography (CT) has been investigated as a supportive tool for forensic identification and has proven valuable; it can scan the dentition of a deceased person within minutes. In the present study, we investigated currently used restorative materials using ultra-high-resolution dual-source CT and the extended CT scale for the purpose of a color-encoded, to-scale, and artifact-free visualization in 3D volume rendering. In 122 human molars, 220 cavities with 2-, 3-, 4-, and 5-mm diameters were prepared. These cavities were restored with presently used filling materials (different composites, temporary filling materials, ceramic, and liner), six teeth for each material and cavity size (except amalgam, n = 1). The teeth were CT scanned and images reconstructed using an extended CT scale. Filling materials were analyzed in terms of the resulting Hounsfield units (HU) and the representation of filling size within the images. The restorative materials showed distinctly differing radiopacities, allowing for CT-data-based discrimination; in particular, ceramic and composite fillings could be differentiated. The HU values were used to generate an updated volume-rendering preset for postmortem extended-CT-scale data of the dentition that easily visualizes the position of restorations, their shape (to scale), and the material used, color-encoded in 3D. The results provide the scientific background for the application of 3D volume rendering to visualize the human dentition for forensic identification purposes.
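
Discrimination by radiopacity amounts to mapping measured HU values to materials; a toy sketch of such a mapping, where the HU thresholds below are hypothetical placeholders, not values measured in the study:

```python
# Toy HU-range classifier for restorative materials on an extended CT
# scale. The thresholds are invented for illustration only; the study's
# actual measured HU ranges would replace them.
def classify_filling(hu):
    """Map a Hounsfield value to a material class (hypothetical ranges)."""
    if hu < 4000:
        return "composite"
    elif hu < 12000:
        return "ceramic"
    return "amalgam"

for sample_hu in (2500, 8000, 15000):
    print(sample_hu, "->", classify_filling(sample_hu))
```

A volume-rendering preset then assigns each such class its own color, which is how the color-encoded 3D visualization described above is produced.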

Relevance: 30.00%

Abstract:

With the introduction of mid-level ethanol blend gasoline for commercial sale, the compatibility of different off-road engines with these fuels needs to be evaluated. This report details a test study of one mid-level ethanol fuel in a two-stroke, hand-held gasoline engine used to power line trimmers. The study, sponsored by E3, tests the effectiveness of an aftermarket spark plug from E3 Spark Plug when using a mid-level ethanol blend gasoline. A 15% ethanol blend by volume (E15) was the mid-level test fuel, and 10% ethanol by volume (E10) was used as the baseline fuel. The testing comprised running the engine at different load points and throttle positions to evaluate the cylinder head temperature, exhaust temperature, and engine speed. Raw gas emissions were also measured to determine the impact of the performance spark plug. The lower calorific value of the E15 fuel decreased engine speed along with the fuel consumption and exhaust gas temperature. The HC emissions with E15 fuel and the E3 spark plug increased compared to the baseline in most cases, and NO formation depended on the cylinder head temperature. The E3 spark plug tended to increase the cylinder head temperature irrespective of fuel type while reducing engine speed.
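
The effect attributed to the lower calorific value of E15 can be illustrated by blending heating values; the density and heating-value figures below are typical literature numbers, not measurements from this study:

```python
# Mass-based lower heating value (LHV) of ethanol-gasoline blends,
# illustrating why E15 carries slightly less energy than E10.
# All property values are typical literature figures (assumed).
LHV_GASOLINE = 43.4   # MJ/kg (assumed)
LHV_ETHANOL = 26.8    # MJ/kg (assumed)

def blend_lhv(ethanol_vol_frac, rho_eth=0.789, rho_gas=0.745):
    """LHV of a blend, converting volume fraction to mass fraction."""
    m_eth = ethanol_vol_frac * rho_eth
    m_gas = (1 - ethanol_vol_frac) * rho_gas
    w_eth = m_eth / (m_eth + m_gas)
    return w_eth * LHV_ETHANOL + (1 - w_eth) * LHV_GASOLINE

print(f"E10: {blend_lhv(0.10):.1f} MJ/kg, E15: {blend_lhv(0.15):.1f} MJ/kg")
```

The roughly 2% drop in energy content from E10 to E15 is consistent with the reduced engine speed and exhaust temperature reported above.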

Relevance: 30.00%

Abstract:

We used differential GPS measurements from a 13-station GPS network spanning the Santa Ana Volcano and Coatepeque Caldera to characterize the inter-eruptive activity and tectonic movements near these two active and potentially hazardous features. Caldera-forming events occurred from 70-40 ka, and at the Santa Ana/Izalco volcanoes eruptive activity occurred as recently as 2005. Twelve differential stations were surveyed for 1 to 2 hours on a monthly basis from February through September 2009 and tied to a centrally located continuous GPS station, which serves as the reference site for this volcanic network. Repeatabilities of the averages from 20-minute sessions taken over 20 hours or longer range from 2-11 mm in the horizontal (north and east) components of the inter-station baselines, suggesting a lower detection limit for the horizontal components of any short-term tectonic or volcanic deformation. Repeatabilities of the vertical baseline component range from 12-34 mm. Analysis of the precipitable water vapor in the troposphere suggests that tropospheric decorrelation, as a function of baseline length and variable site elevations, is the most likely source of vertical error. Differential motions of the 12 sites relative to the continuous reference site reveal inflation from February through July at several sites surrounding the caldera, with vertical displacements ranging from 61 mm to 139 mm, followed by a lower-magnitude deflation event on 1.8-7.4 km-long baselines. Uplift rates for the inflationary period reach 300 mm/yr with 1σ uncertainties of ±26-119 mm. Only one other station outside the caldera exhibits a similar deformation trend, suggesting a localized source. The results suggest that differential GPS measurements from short-duration occupations over short baselines can be a useful monitoring tool at sub-tropical volcanoes and calderas.
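
As a rough consistency check, the largest reported vertical displacement annualized over the February-July inflation window is close to the quoted uplift rate; the window length below is an assumption:

```python
# Annualized uplift rate from the largest vertical displacement
# reported for the February-July inflation episode.
disp_mm = 139.0   # mm, largest vertical displacement (from the abstract)
months = 5.5      # assumed length of the Feb-Jul window, in months

rate_mm_per_yr = disp_mm / months * 12
print(f"{rate_mm_per_yr:.0f} mm/yr")  # ~300 mm/yr, matching the reported rate
```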

Relevance: 30.00%

Abstract:

Transformers are very important elements of any power system. Unfortunately, they are subjected to through-faults and abnormal operating conditions which can affect not only the transformer itself but also other equipment connected to it. Thus, it is essential to provide sufficient protection for transformers as well as the best possible selectivity and sensitivity of that protection. Nowadays, microprocessor-based relays are widely used to protect power equipment. Current differential and voltage protection strategies are used in transformer protection applications and provide fast and sensitive multi-level protection and monitoring. The elements responsible for detecting turn-to-turn and turn-to-ground faults are the negative-sequence percentage differential element and the restricted earth-fault (REF) element, respectively. During severe internal faults, current transformers can saturate and slow down relay operation, which affects the degree of equipment damage. The scope of this work is to develop a modeling methodology to perform simulations and laboratory tests for internal faults such as turn-to-turn and turn-to-ground faults in two step-down power transformers with capacity ratings of 11.2 MVA and 290 MVA. The simulated current waveforms are injected into a microprocessor relay to check its sensitivity to these internal faults. Saturation of current transformers is also studied in this work. All simulations are performed with the Alternative Transients Program (ATP) utilizing the internal fault model for three-phase two-winding transformers. The tested microprocessor relay is the SEL-487E current differential and voltage protection relay. The results showed that the ATP internal fault model can be used for testing microprocessor relays for any percentage of turns involved in an internal fault.
An interesting observation from the experiments was that the SEL-487E relay is more sensitive to turn-to-turn faults than advertised for the transformers studied. The sensitivity of the restricted earth-fault element was confirmed. CT saturation cases showed that low-accuracy CTs can saturate when a high percentage of turns is involved in a turn-to-turn fault, with the CT burden affecting the extent of saturation. Recommendations for future work include more accurate simulation of internal faults, transformer energization inrush, and other scenarios involving core saturation, using the newest version of the internal fault model. The SEL-487E relay or other microprocessor relays should again be tested for performance. Also, applying a grounding bank to the delta-connected side of a transformer will increase the zone of protection, and relay performance can then be tested for internal ground faults on both sides of a transformer.
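
The negative-sequence element mentioned above operates on the negative-sequence component of the phase currents, obtained from the standard symmetrical-component transform; a minimal sketch (phasor values illustrative):

```python
# Symmetrical-component extraction of the negative-sequence current,
# the quantity the negative-sequence percentage differential element
# uses to detect turn-to-turn faults. Phasor values are illustrative.
import cmath
import math

a = cmath.exp(2j * math.pi / 3)  # 120-degree rotation operator

def negative_sequence(ia, ib, ic):
    """I2 = (Ia + a^2 * Ib + a * Ic) / 3."""
    return (ia + a**2 * ib + a * ic) / 3

# A balanced positive-sequence set produces essentially zero I2,
# so any appreciable I2 indicates an unbalance such as an internal fault.
ia, ib, ic = 1 + 0j, a**2, a
print(abs(negative_sequence(ia, ib, ic)))  # ~0 for the balanced case
```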

Relevance: 30.00%

Abstract:

Electrochemical capacitors (ECs), also known as supercapacitors or ultracapacitors, are energy storage devices with properties between those of batteries and conventional capacitors. ECs have evolved through several generations. The current trend is to combine a double-layer electrode with a battery-type electrode in an asymmetric capacitor configuration. The double-layer electrode is usually activated carbon (AC), since it has high surface area, good conductivity, and relatively low cost. The battery-type electrode usually consists of PbO2 or Ni(OH)2. In this research, a graphitic carbon foam was impregnated with Co-substituted Ni(OH)2 by electrochemical deposition to serve as the positive electrode in the asymmetric capacitor. The purpose was to reduce the cost and weight of the ECs while maintaining or increasing capacitance and gravimetric energy storage density. The XRD results indicated that the nickel-carbon foam electrode was a typical α-Ni(OH)2. The specific capacitance of the nickel-carbon foam electrode was 2641 F/g at 5 mA/cm2, higher than the previously reported value of 2080 F/g for a 7.5% Al-substituted α-Ni(OH)2 electrode. Three different ACs (RP-20, YP-50F, and Ketjenblack EC-600JD) were evaluated for their morphology and electrochemical performance to determine their suitability for use in ECs. The study indicated that YP-50F demonstrated the best overall performance because of its combination of micropore and mesopore structures. Therefore, YP-50F was chosen to pair with the nickel-carbon foam electrode for further evaluation. Six cells with different ratios of negative to positive active mass were fabricated to study the electrochemical performance. Among the different mass ratios, the asymmetric capacitor with a mass ratio of 3.71 gave the highest specific energy and specific power, 24.5 Wh/kg and 498 W/kg, respectively.
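
Specific energy figures like the 24.5 Wh/kg above come from the stored energy of the cell divided by its active mass; a generic sketch with hypothetical cell parameters (not values from this study):

```python
# Generic specific-energy calculation for a capacitor cell, showing
# how a figure such as 24.5 Wh/kg is derived. The capacitance, voltage
# window, and mass below are hypothetical placeholders.
c_cell = 98.0      # F, total cell capacitance (hypothetical)
v_max = 1.6        # V, operating voltage window (hypothetical)
mass_kg = 0.00143  # kg, total active mass of both electrodes (hypothetical)

energy_j = 0.5 * c_cell * v_max**2          # E = C * V^2 / 2, in joules
specific_energy_wh_kg = energy_j / 3600 / mass_kg
print(f"{specific_energy_wh_kg:.1f} Wh/kg")
```

Real asymmetric cells deviate somewhat from this ideal-capacitor formula because the battery-type electrode does not discharge linearly, but the calculation shows the governing relationship between capacitance, voltage window, and mass.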

Relevance: 30.00%

Abstract:

State standardized testing has always been a tool to measure a school’s performance and to help evaluate school curriculum. However, with the school of choice legislation in 1992, the MEAP test became a measuring stick by which to grade schools and a major tool in attracting school of choice students. Now, declining enrollment and a state budget struggling to stay out of the red have made school of choice students more important than ever before; MEAP scores have become the deciding factor in some cases. For the past five years, the Hancock Middle School staff has been working hard to improve their students’ MEAP scores in accordance with President Bush's “No Child Left Behind” legislation. In 2005, the school was awarded a grant that enabled staff to work for two years on writing and working towards school goals based on the improvement of MEAP scores in writing and math. As part of this effort, the school purchased an internet-based program designed to give students practice on state content standards. This study examined the results of efforts by Hancock Middle School to improve student scores in mathematics on the MEAP test through the use of an online program called “Study Island.” In the past, the program was used to remediate students and as an end-of-year review with an incentive for students completing a certain number of objectives. It had also been used as a review before upcoming MEAP testing in the fall. All of these methods may have helped a few students perform better on their standardized test, but the question remained whether sustained use of the program in a classroom setting would increase understanding of concepts and performance on the MEAP for most students. This study addressed that question.
Student MEAP scores and Study Island data from experimental and comparison groups of students were compared to understand how a sustained use of Study Island in the classroom would impact student test scores on the MEAP. In addition, these data were analyzed to determine whether Study Island results provide a good indicator of students’ MEAP performance. The results of the study suggest that there were limited benefits related to sustained use of Study Island and gave some indications about the effectiveness of the mathematics curriculum at Hancock Middle School. These results and implications for instruction are discussed.

Relevance: 30.00%

Abstract:

With energy demands and costs growing every day, the need for improving energy efficiency in electrical devices has become very important. Research into various methods of improving efficiency for all electrical components will be key to meeting future energy needs. This report documents the design, construction, and testing of a research-quality electric machine dynamometer and test bed. This test cell system can be used for research in several areas including electric drive systems, electric vehicle propulsion systems, power electronic converters, and load/source elements in an AC microgrid, among many others. The test cell design criteria and decisions are discussed in reference to user functionality and flexibility. The individual power components are discussed in detail as to how they relate to the project, highlighting any features used in operation of the test cell. A project timeline is presented, clearly stating the work done by the different individuals involved in the project. In addition, the system is parameterized and benchmark data are used to demonstrate the functional operation of the system.

Relevance: 30.00%

Abstract:

There has been a continuous evolutionary process in asphalt pavement design. In the beginning it was crude and based on past experience. Through research, empirical methods were developed based on materials' response to specific loading at the AASHO Road Test. Today, pavement design has progressed to a mechanistic-empirical method. This methodology takes into account the mechanical properties of the individual layers and uses empirical relationships to relate them to performance. The mechanical tests used as part of this methodology include dynamic modulus and flow number, which have been shown to correlate with field pavement performance. This thesis was based on a portion of a research project being conducted at Michigan Technological University (MTU) for the Wisconsin Department of Transportation (WisDOT). The global scope of this project dealt with the development of a library of values for the mechanical properties of the asphalt pavement mixtures paved in Wisconsin. Additionally, a comparison of the current associated pavement design with that of the new AASHTO Design Guide was conducted. This thesis describes the development of the current pavement design methodology as well as the associated tests as part of a literature review. This report also details the materials that were sampled from field operations around the state of Wisconsin and their testing preparation and procedures. Testing was conducted on available round-robin and three Wisconsin mixtures, and the main results of the research were: The test history of the Superpave SPT (fatigue and permanent deformation dynamic modulus) does not affect the mean response for either dynamic modulus or flow number, but does increase the variability in the flow number test results.
The method of specimen preparation (compacting to test geometry versus sawing/coring to test geometry) does not statistically appear to affect the intermediate- and high-temperature dynamic modulus and flow number test results. The 2002 AASHTO Design Guide simulations support the findings of the statistical analyses that the method of specimen preparation did not impact the performance of the HMA as a structural layer as predicted by the Design Guide software. The methodologies for determining the temperature-viscosity relationship as stipulated by Witczak are sensitive to the viscosity test temperatures employed. An increase in asphalt binder content of 0.3% was found to actually increase the dynamic modulus at the intermediate and high test temperatures as well as the flow number. This result was based on the testing that was conducted and contradicts previous research and the hypothesis put forth for this thesis; it should be used with caution and requires further review. Based on the limited results presented herein, the asphalt binder grade appears to have a greater impact on performance in the Superpave SPT than aggregate angularity. Dynamic modulus and flow number were shown to increase with traffic level (requiring an increase in aggregate angularity) and with a decrease in air voids, confirming the hypotheses regarding these two factors. Accumulated micro-strain at flow number, as opposed to the flow number itself, appeared to be a promising measure for comparing the quality of specimens within a specific mixture. At the current time, the Design Guide and its associated software need to be further improved prior to implementation by owners/agencies.
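
For reference, the dynamic modulus |E*| measured in the Superpave SPT is simply the ratio of stress amplitude to recoverable strain amplitude under sinusoidal loading; a sketch with illustrative numbers:

```python
# Dynamic modulus |E*| = stress amplitude / strain amplitude under
# sinusoidal loading. The amplitudes below are illustrative values,
# not data from the Wisconsin mixtures tested in this thesis.
stress_amp_kpa = 600.0   # kPa, applied deviatoric stress amplitude
strain_amp = 80e-6       # recoverable axial strain (80 microstrain)

e_star_mpa = stress_amp_kpa / strain_amp / 1000.0  # kPa -> MPa
print(f"|E*| = {e_star_mpa:.0f} MPa")
```

The phase lag between the stress and strain signals (not shown here) additionally characterizes the viscous behavior of the mixture.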

Relevance: 30.00%

Abstract:

Infrared thermography is a well-recognized non-destructive testing technique for evaluating concrete bridge elements such as bridge decks and piers. However, some obstacles and limitations must be overcome before this invaluable technique can be added to the bridge inspector's toolbox. Infrared thermography is based on collecting radiant temperature and presenting the results as a thermal infrared image. Infrared thermography tests can be conducted in two ways, passive and active; the source of heat is the main difference between the two approaches. Solar energy and ambient temperature change are the main heat sources in a passive infrared thermography test, while active infrared thermography involves generating a temperature gradient using an external source of heat other than the sun. Passive infrared thermography testing was conducted on three concrete bridge decks in Michigan. Ground truth information was gathered by coring several locations on each bridge deck to validate the results obtained from the passive infrared thermography test. Challenges associated with data collection and processing using passive infrared thermography are discussed and provide additional evidence that passive infrared thermography is a promising remote sensing tool for bridge inspections. To improve the capabilities of the infrared thermography technique for evaluating the underside of bridge decks and bridge girders, an active infrared thermography technique using the surface heating method was developed in the laboratory on five concrete slabs with simulated delaminations. Results from this study demonstrated that active infrared thermography not only eliminates some limitations associated with passive infrared thermography, but also provides information regarding the depth of the delaminations.
Active infrared thermography was conducted on a segment of an out-of-service prestressed box beam and cores were extracted from several locations on the beam to validate the results. This study confirms the feasibility of the application of active infrared thermography on concrete bridges and of estimating the size and depth of delaminations. From the results gathered in this dissertation, it was established that applying both passive and active thermography can provide transportation agencies with qualitative and quantitative measures for efficient maintenance and repair decision-making.
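
The depth information provided by active thermography rests on one-dimensional heat diffusion, where the time for a defect to become visible scales roughly with depth squared over thermal diffusivity; a sketch using a typical literature diffusivity for concrete (not a value measured in this study):

```python
# One-dimensional heat-diffusion rule of thumb for active thermography:
# the observation time for a defect at depth z scales as t ~ z^2 / alpha.
# The diffusivity is a typical literature value for concrete (assumed).
alpha_concrete = 8.3e-7  # m^2/s, assumed thermal diffusivity of concrete

def observation_time(depth_m):
    """Approximate time for a delamination at depth_m to become visible."""
    return depth_m**2 / alpha_concrete

print(f"{observation_time(0.05):.0f} s")  # a 5 cm deep defect: ~50 min
```

This scaling is why deeper delaminations appear later and with weaker thermal contrast, and why timing the thermal response allows depth to be estimated.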

Relevance: 30.00%

Abstract:

Strain rate significantly affects the strength of a material. The Split-Hopkinson Pressure Bar (SHPB) was initially used to study the effects of high strain rates (~10^3 1/s) in the testing of metals. Later modifications to the original technique allowed for the study of brittle materials such as ceramics, concrete, and rock. While material properties of wood at static and creep strain rates are readily available, data on the dynamic properties of wood are sparse. Previous work using the SHPB technique with wood has been limited in scope to variation of only a few conditions, and tests of the applicability of SHPB theory to wood have not been performed. Tests were conducted using a large-diameter (3.0 inch (75 mm)) SHPB. The strain rate and total strain applied to a specimen depend on the striker bar length and velocity at impact; pulse shapers are used to further modify the strain rate and change the shape of the strain pulse. A series of tests was used to determine the conditions necessary to produce a strain rate, total strain, and pulse shape appropriate for testing wood specimens. Hard maple, consisting of sugar maple (Acer saccharum) and black maple (Acer nigrum), and eastern white pine (Pinus strobus) specimens were used to represent a dense hardwood and a low-density softwood. Specimens were machined to diameters of 2.5 and 3.0 inches, and an assortment of lengths was tested to determine the appropriate specimen dimensions. Longitudinal specimens of 1.5 inch length and radial and tangential specimens of 0.5 inch length were found to be most applicable to SHPB testing. Stress/strain curves were generated from the SHPB data and validated with 6061-T6 aluminum and wood specimens. Stress was indirectly corroborated with gaged aluminum specimens. Specimen strain was assessed with strain gages, digital image analysis, and measurement of residual strain to confirm the strain calculated from the SHPB data.
The SHPB was found to be a useful tool in accurately assessing the material properties of wood under high strain rates (70 to 340 1/s) and short load durations (70 to 150 μs to compressive failure).
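
For context, classical one-wave SHPB analysis recovers specimen stress from the transmitted bar strain and strain rate from the reflected strain; the bar properties and signal amplitudes below are illustrative assumptions, not data from this study:

```python
# Classical one-wave SHPB relations: specimen stress from the
# transmitted strain signal, strain rate from the reflected signal.
# Bar properties and strain amplitudes are illustrative assumptions.
import math

E_bar = 200e9            # Pa, bar elastic modulus (steel, assumed)
rho_bar = 7800.0         # kg/m^3, bar density (assumed)
c0 = math.sqrt(E_bar / rho_bar)   # one-dimensional bar wave speed

d_bar, d_spec = 0.075, 0.0635     # m, bar (3.0 in) and specimen (2.5 in)
L_spec = 0.038                    # m, specimen length (~1.5 in)

def area(d):
    return math.pi * d**2 / 4

eps_t = 1.5e-4           # transmitted strain amplitude (illustrative)
eps_r = -1.0e-3          # reflected strain amplitude (illustrative)

stress_spec = E_bar * area(d_bar) / area(d_spec) * eps_t  # Pa
strain_rate = -2.0 * c0 / L_spec * eps_r                  # 1/s

print(f"stress {stress_spec / 1e6:.1f} MPa, strain rate {strain_rate:.0f} 1/s")
```

With these illustrative inputs the strain rate lands in the 70 to 340 1/s range reported above, showing how bar signals map onto the test conditions.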

Relevance: 30.00%

Abstract:

With the development of genotyping and next-generation sequencing technologies, multi-marker testing in genome-wide association studies and rare-variant association studies has become an active research area in statistical genetics. This dissertation presents three methodologies for association studies that explore different features of genetic data and demonstrates how to use these methods to test genetic association hypotheses. The methods can be categorized into three scenarios: 1) multi-marker testing for regions of strong linkage disequilibrium, 2) multi-marker testing for family-based association studies, and 3) multi-marker testing for rare-variant association studies. I also discuss the advantages of using these methods and demonstrate their power through simulation studies and applications to real genetic data.
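
The simplest building block behind such association tests is a chi-square test on a 2x2 table of allele counts in cases versus controls; a minimal sketch with made-up counts:

```python
# Single-marker allele-count association test: Pearson chi-square on a
# 2x2 table of minor/major allele counts in cases vs controls.
# The counts below are made up for illustration.
def chi2_2x2(a, b, c, d):
    """Pearson chi-square statistic for the 2x2 table [[a, b], [c, d]]."""
    n = a + b + c + d
    num = n * (a * d - b * c) ** 2
    den = (a + b) * (c + d) * (a + c) * (b + d)
    return num / den

# minor/major allele counts: cases (60, 140), controls (35, 165)
stat = chi2_2x2(60, 140, 35, 165)
print(f"chi-square = {stat:.2f}")  # compare to 3.84 (alpha = 0.05, 1 df)
```

Multi-marker methods like those in the dissertation generalize this idea, combining evidence across many markers in a region rather than testing each one marginally.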

Relevance: 30.00%

Abstract:

This report shares my efforts in developing a solid unit of instruction with a clear focus on student outcomes. I have been a teacher for 20 years and have been writing and revising curricula for much of that time. However, most of this work was done without the benefit of current research on how students learn and did not focus on what and how students are learning. My journey as a teacher has involved a lot of trial and error. My traditional method of teaching is to look at the benchmarks (now content expectations) to see what needs to be covered. My unit consists of having students read the appropriate sections in the textbook, complete worksheets, watch a video, and take some notes. I try to include at least one hands-on activity, one or more quizzes, and the traditional end-of-unit test consisting mostly of multiple-choice questions I find in the textbook. I try to be engaging, make the lessons fun, and hope that at the end of the unit my students get whatever concepts I've presented so that we can move on to the next topic. I want to increase students' understanding of science concepts and their ability to connect that understanding to the real world. However, sometimes I feel that my lessons are missing something. For a long time I have wanted to develop a unit of instruction that I know is an effective tool for the teaching and learning of science. In this report, I describe my efforts to reform my curricula using the “Understanding by Design” process. I want to see if this style of curriculum design will help me be a more effective teacher and if it will lead to an increase in student learning. My hypothesis is that this new (for me) approach to teaching will lead to increased understanding of science concepts among students because it is based on purposefully thinking about learning targets based on “big ideas” in science.
For my reformed curricula I incorporate lessons from several outstanding programs I've been involved with, including EpiCenter (Purdue University), Incorporated Research Institutions for Seismology (IRIS), the Master of Science Program in Applied Science Education at Michigan Technological University, and the Michigan Association for Computer Users in Learning (MACUL). In this report, I present the methodology for how I developed a new unit of instruction based on the Understanding by Design process. Several lessons and learning plans I've developed for the unit, which follow the 5E Learning Cycle, are presented as appendices at the end of this report. I also include the results of pilot testing of one of the lessons. Although the lesson I pilot-tested was not as successful in increasing student learning outcomes as I had anticipated, the development process I followed was helpful in that it required me to focus on important concepts. Conducting the pilot test was also helpful to me because it led me to identify ways in which I could improve the lesson in the future.