962 results for Direct digital detector images
Abstract:
The push for improved fuel economy and reduced emissions has led to great achievements in engine performance and control. These achievements have increased the efficiency and power density of gasoline engines dramatically in the last two decades. With the added power density, thermal management of the engine has become increasingly important. It is therefore critical to have accurate temperature and heat transfer models, as well as data to validate them. With the recent adoption of the 2025 Corporate Average Fuel Economy (CAFE) standard, there has been a push to improve the thermal efficiency of internal combustion engines even further. Lean and dilute combustion regimes, along with waste heat recovery systems, are being explored as options for improving efficiency. To understand how these technologies will impact engine performance and each other, this research analyzed the engine both from a 1st law energy balance perspective and from a 2nd law exergy perspective. This research also provided insights into the effects of various parameters on in-cylinder temperatures and heat transfer, as well as data for validating other models. Engine load was found to be the dominant factor in the energy distribution, with higher loads resulting in lower coolant heat transfer and higher brake work and exhaust energy. From an exergy perspective, the exhaust system offered the best waste heat recovery potential due to its significantly higher temperatures compared to the cooling circuit. EGR and lean combustion both resulted in lower combustion chamber and exhaust temperatures; however, in most cases the increased flow rates resulted in a net increase in exhaust energy. The exhaust exergy, on the other hand, either increased or decreased depending on the location in the exhaust system and the other operating conditions.
The effects of dilution from lean operation and EGR were compared using a dilution ratio, and the results showed that lean operation produced a larger increase in efficiency than the same amount of dilution with EGR. Finally, a method for identifying fuel spray impingement from piston surface temperature measurements was developed. Note: The material contained in this section is planned for submission as part of a journal article and/or conference paper in the future.
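The second-law comparison described above can be sketched numerically. The snippet below estimates the specific flow exergy of an exhaust stream versus a coolant stream from temperature alone, treating the gas as ideal with constant cp and neglecting pressure and chemical terms; all numbers (cp, temperatures, dead state) are illustrative assumptions, not data from this work:

```python
import math

# Illustrative constants (assumed, not taken from the study)
CP = 1.1      # kJ/(kg*K), rough cp of exhaust gas
T0 = 298.15   # K, dead-state (ambient) temperature

def flow_exergy(T, cp=CP, t0=T0):
    """Specific flow exergy [kJ/kg] of an ideal-gas stream at temperature T,
    thermal term only: ex = cp*(T - T0) - T0*cp*ln(T/T0)."""
    return cp * (T - t0) - t0 * cp * math.log(T / t0)

# Exhaust (~900 K) vs. coolant (~360 K): the hot stream carries far
# more recoverable work per unit mass, even for comparable heat content.
ex_exhaust = flow_exergy(900.0)
ex_coolant = flow_exergy(360.0)
```

Because the logarithmic term discounts heat delivered near ambient temperature, the exhaust dominates the coolant in recoverable work per unit mass, matching the qualitative conclusion above.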
Abstract:
Experimental work and analysis were conducted to investigate engine startup robustness and emissions of a flex-fuel spark ignition (SI) direct injection (DI) engine. The vaporization and other characteristics of ethanol fuel blends present a challenge at engine startup. Strategies were investigated to reduce the enrichment requirement for the first engine startup cycle and the emissions for the second and third fired cycles at 25°C ± 1°C engine and intake air temperature. Research was conducted on a single-cylinder SIDI engine with gasoline and E85 fuels to study the effects on the first fired cycle of engine startup. Piston configurations that included a compression ratio change (11 vs. 15.5) and a piston geometry change (flat-top vs. bowl) were tested, along with changes in intake cam timing (95, 110, 125) and fuel pressure (0.4 MPa vs. 3 MPa). The goal was to replicate the engine speed, manifold pressure, fuel pressure, and testing temperature from an engine startup trace in order to investigate the first fired cycle. Results showed that the bowl piston enabled lower equivalence ratio engine starts with gasoline fuel, while also showing lower IMEP at the same equivalence ratio compared to the flat-top piston. With E85, the bowl piston showed reduced IMEP as compression ratio increased at the same equivalence ratio. A preference for constant intake valve timing across fuels seemed to indicate that the flat-top piston might be a good flex-fuel piston. Significant improvements were seen with the higher-CR bowl piston for high fuel pressure starts, but no improvement was seen with low fuel pressures. Simulation work was conducted in GT-POWER to analyze the initial three cycles of engine startup for the same set of hardware used in the experiments. A steady-state validated model was modified for startup conditions. The results allowed an understanding of the relative residual levels and IMEP at the test points in the cam phasing space.
This allowed selecting additional test points that enable the use of higher residual levels, eliminating those with trapped mass too small to produce the IMEP required for proper engine turnover. The second phase of experimental testing, covering the 2nd and 3rd startup cycles, revealed that both E10 and E85 prefer the same SOI of 240°bTDC at the second and third startup cycles for the flat-top piston and high injection pressures. The optimal cam timing for E85 at startup showed that it tolerates more residuals than E10. Higher internal residuals drive down the Φ requirement for both fuels up to their combustion stability limit; this is thought to be a direct benefit to vaporization due to the increased cycle start temperature. Benefits are shown for an advanced IMOP and retarded EMOP strategy at engine startup. Overall, the amount of residuals preferred by the engine with E10 fuel at startup is thought to be constant across engine speed, which could enable easier selection of optimized cam positions across the startup speeds.
Abstract:
Obesity is becoming an epidemic in most developed countries. The fundamental cause of obesity and overweight is an energy imbalance between calories consumed and calories expended. It is essential to monitor everyday food intake for obesity prevention and management. Existing dietary assessment methods usually require manual recording and recall of food types and portions. Accuracy of the results largely relies on many uncertain factors such as the user's memory, food knowledge, and portion estimation, and is therefore often compromised. Accurate and convenient dietary assessment methods are still lacking and are needed by both the general population and the research community. In this thesis, an automatic food intake assessment method using the cameras and inertial measurement units (IMUs) of smartphones was developed to help people foster a healthy lifestyle. With this method, users use their smartphones before and after a meal to capture images or videos of the meal. The smartphone recognizes food items, calculates the volume of the food consumed, and provides the results to users. The technical objective is to explore the feasibility of image-based food recognition and image-based volume estimation. This thesis comprises five publications that address four specific goals: (1) to develop a prototype system with existing methods in order to review the literature, find its drawbacks, and explore the feasibility of developing novel methods; (2) based on the prototype system, to investigate new food classification methods that improve recognition accuracy to a field application level; (3) to design indexing methods for large-scale image databases to facilitate the development of new food image recognition and retrieval algorithms; (4) to develop novel, convenient, and accurate food volume estimation methods using only smartphones with cameras and IMUs. A prototype system was implemented to review existing methods.
An image feature detector and descriptor were developed, and a nearest neighbor classifier was implemented to classify food items. A credit card marker method was introduced for metric-scale 3D reconstruction and volume calculation. To increase recognition accuracy, novel multi-view food recognition algorithms were developed to recognize regularly shaped food items. To further increase accuracy and make the algorithm applicable to arbitrary food items, new food features and new classifiers were designed. The efficiency of the algorithm was increased by developing a novel image indexing method for large-scale image databases. Finally, the volume calculation was enhanced by reducing the marker and introducing IMUs. Sensor fusion techniques combining measurements from cameras and IMUs were explored to infer the metric scale of the 3D model as well as to reduce noise from these sensors.
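The nearest neighbor classification step mentioned above can be sketched in a few lines: each query feature vector receives the label of its closest training sample in Euclidean distance. The feature vectors and class names below are toy placeholders, not the actual descriptors developed in the thesis:

```python
from math import dist

# Toy 3-D feature vectors per food class (hypothetical values)
TRAINING = {
    "apple":  [(0.90, 0.20, 0.10), (0.85, 0.25, 0.15)],
    "banana": [(0.90, 0.85, 0.20), (0.95, 0.90, 0.25)],
    "salad":  [(0.20, 0.80, 0.30), (0.25, 0.75, 0.35)],
}

def nearest_neighbor(feature):
    """Return the class label of the training sample closest to `feature`."""
    best_label, best_d = None, float("inf")
    for label, samples in TRAINING.items():
        for sample in samples:
            d = dist(feature, sample)
            if d < best_d:
                best_label, best_d = label, d
    return best_label
```

In practice the thesis replaces this brute-force scan with large-scale image indexing precisely because the linear search over all training samples becomes the bottleneck as the database grows.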
Abstract:
Clouds are one of the most influential elements of weather in the earth system, yet they are also one of the least understood. Understanding their composition and behavior at small scales is critical to understanding and predicting larger-scale feedbacks. Currently, the best method to study clouds on the microscale is through airborne in situ measurements using optical instruments capable of resolving clouds at the individual particle level. However, current instruments are unable to sufficiently resolve the scales important to cloud evolution and behavior. The Holodec is a new generation of optical cloud instrument which uses digital inline holography to overcome many of the limitations of conventional instruments. However, its performance and reliability were limited by several deficiencies in its original design. These deficiencies were addressed and corrected to advance the instrument from the prototype stage to an operational instrument. In addition, the processing software used to reconstruct and analyze digitally recorded holograms was improved to increase robustness and ease of use.
Abstract:
Tissue engineering and regenerative medicine have emerged in an effort to generate replacement tissues capable of restoring native tissue structure and function, but because of the complexity of biological systems, this has proven to be much harder than originally anticipated. Silica-based bioactive glasses are popular as biomaterials because of their ability to enhance osteogenesis and angiogenesis. Sol-gel processing methods are popular for generating these materials because they offer: 1) mild processing conditions; 2) easily controlled structure and composition; 3) the ability to incorporate biological molecules; and 4) inherent biocompatibility. The goal of this work was to develop a bioactive vaporization system for the deposition of silica sol-gel particles as a means to modify the material properties of a substrate at the nano- and micro-level to better mimic the instructive conditions of native bone tissue, promoting appropriate osteoblast attachment, proliferation, and differentiation in support of bone tissue regeneration. The size distribution, morphology, and degradation behavior of the vapor-deposited sol-gel particles developed here were found to depend on formulation (H2O:TMOS, pH, Ca/P incorporation) and manufacturing (substrate surface character, deposition time). Additionally, deposition of these particles onto substrates can be used to modify overall substrate properties, including hydrophobicity, roughness, and topography. Deposition of Ca/P sol particles induced apatite-like mineral formation on both two- and three-dimensional materials when exposed to body fluids. Gene expression analysis suggests that Ca/P sol particles induce upregulation of osteoblast gene expression (Runx2, OPN, OCN) in preosteoblasts during early culture time points.
Upon further modification, specifically increased particle stability, these Ca/P sol particles have the potential to serve as a simple and unique means of modifying biomaterial surface properties to direct osteoblast differentiation.
Abstract:
PURPOSE: To determine if multi–detector row computed tomography (CT) can replace conventional radiography and be performed alone in severe trauma patients for the depiction of thoracolumbar spine fractures. MATERIALS AND METHODS: One hundred consecutive severe trauma patients who underwent conventional radiography of the thoracolumbar spine as well as thoracoabdominal multi–detector row CT were prospectively identified. Conventional radiographs were reviewed independently by three radiologists and two orthopedic surgeons; CT images were reviewed by three radiologists. Reviewers were blinded both to one another’s reviews and to the results of initial evaluation. Presence, location, and stability of fractures, as well as quality of reviewed images, were assessed. Statistical analysis was performed to determine sensitivity and interobserver agreement for each procedure, with results of clinical and radiologic follow-up as the standard of reference. The time to perform each examination and the radiation dose involved were evaluated. A resource cost analysis was performed. RESULTS: Sixty-seven fractured vertebrae were diagnosed in 26 patients. Twelve patients had unstable spine fractures. Mean sensitivity and interobserver agreement, respectively, for detection of unstable fractures were 97.2% and 0.951 for multi–detector row CT and 33.3% and 0.368 for conventional radiography. The median times to perform a conventional radiographic and a multi–detector row CT examination, respectively, were 33 and 40 minutes. Effective radiation doses at conventional radiography of the spine and thoracoabdominal multi–detector row CT, respectively, were 6.36 mSv and 19.42 mSv. Multi–detector row CT enabled identification of 146 associated traumatic lesions. The costs of conventional radiography and multi–detector row CT, respectively, were $145 and $880 per patient. CONCLUSION: Multi–detector row CT is a better examination for depicting spine fractures than conventional radiography. 
It can replace conventional radiography and be performed alone in patients who have sustained severe trauma.
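The two statistics the abstract reports per modality, sensitivity and interobserver agreement, can be computed as follows. The counts and reader ratings below are made-up illustrations, not the study's data, and agreement among more than two readers is typically a multi-rater extension of the pairwise Cohen's kappa shown here:

```python
def sensitivity(true_pos, false_neg):
    """Fraction of actual fractures that the reader detected."""
    return true_pos / (true_pos + false_neg)

def cohens_kappa(a, b):
    """Chance-corrected agreement between two readers' binary ratings
    (1 = fracture called, 0 = no fracture)."""
    assert len(a) == len(b)
    n = len(a)
    observed = sum(x == y for x, y in zip(a, b)) / n
    p_a = sum(a) / n                      # reader A's positive-call rate
    p_b = sum(b) / n                      # reader B's positive-call rate
    expected = p_a * p_b + (1 - p_a) * (1 - p_b)  # agreement by chance
    return (observed - expected) / (1 - expected)
```

A kappa near 0.95, as reported for multi-detector row CT, means the readers agree almost perfectly beyond chance, whereas 0.37 for radiography indicates only fair-to-moderate agreement.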
Abstract:
BrainMaps.org is an interactive high-resolution digital brain atlas and virtual microscope that is based on over 20 million megapixels of scanned images of serial sections of both primate and non-primate brains and that is integrated with a high-speed database for querying and retrieving data about brain structure and function over the internet. Complete brain datasets for various species, including Homo sapiens, Macaca mulatta, Chlorocebus aethiops, Felis catus, Mus musculus, Rattus norvegicus, and Tyto alba, are accessible online. The methods and tools we describe are useful for both research and teaching, and can be replicated by labs seeking to increase accessibility and sharing of neuroanatomical data. These tools offer the possibility of visualizing and exploring completely digitized sections of brains at a sub-neuronal level, and can facilitate large-scale connectional tracing, histochemical and stereological analyses.
Abstract:
The long-awaited verdict by the German Federal Court of Justice on Google image search has drawn much attention to the problem of copyright infringement by search engines on the Internet. In recent years the question has arisen whether a listing in a search engine like Google can itself constitute an infringement of copyright. The decision is widely seen as one of the most important in recent years. With significant effort, the German Federal Court tried to balance the interests of the rights holders with those of digital reality.
Abstract:
New tools for editing digital images, music and films have opened up new possibilities, enabling wider circles of society to engage in 'artistic' activities of different qualities. User-generated content has produced a plethora of new forms of artistic expression. One type of user-generated content is the mashup. Mashups are compositions that combine existing works (often) protected by copyright and transform them into new original creations. The European legislative framework has not yet reacted to the copyright problems provoked by mashups. Neither under the US fair use doctrine nor under the strict corset of limitations and exceptions in Art 5(2)-(3) of the Copyright Directive (2001/29/EC) have mashups found room to develop in a safe legal environment. The contribution analyzes the current European legal framework and identifies its insufficiencies with regard to enabling a legal mashup culture. In the comparison with the US fair use approach, in particular the parody defense, a recent CJEU judgment serves as a comparative example. Finally, an attempt is made to suggest solutions for the European legislator, based on the policy proposals of the EU Commission's "Digital Agenda" and more recent policy documents (e.g. "On Content in the Digital Market", "Licenses for Europe"). In this context, a distinction is made between non-commercial mashup artists and the emerging commercial mashup scene.
Abstract:
A digital camera was used to obtain digital images of beef carcasses moving on the rail in commercial beef packing plants. These images were satisfactory for measurement of backfat thickness and area of ribeye. The measurements were closely correlated with the same two measurements taken from tracings on acetate paper of fat thickness and area of ribeye made on carcasses moving on the rail.
Abstract:
Background: Monitoring alcohol use is important in numerous situations. Direct ethanol metabolites, such as ethyl glucuronide (EtG), have been shown to be useful tools for detecting alcohol use and documenting abstinence. For very frequent or continuous control of abstinence, however, they lack practicability. Therefore, devices measuring ethanol itself might be of interest. This pilot study aims to elucidate the usability and accuracy of the cellular photo digital breathalyzer (CPDB) compared to self-reports in a naturalistic setting. Method: 12 social drinkers were included. Subjects used a CPDB 4 times daily, kept diaries of alcohol use, and submitted urine for EtG testing over a period of 5 weeks. Results: In total, the 12 subjects reported 84 drinking episodes. 1,609 breath tests were performed and 55 urine EtG tests were collected. Of the 84 drinking episodes, the CPDB detected 98.8%. The compliance rate for breath testing was 96%. Of the 55 EtG tests submitted, 1 (1.8%) was positive. Conclusions: The data suggest that the CPDB device holds promise for detecting high, moderate, and low alcohol intake. It appears to have advantages over biomarkers and other monitoring devices. The participants' preference for the CPDB might explain the high compliance. Further studies including comparisons with biomarkers and transdermal devices are needed.
Abstract:
Morphometric investigations of the lung using a point and intersection counting strategy often cannot reveal the full set of morphologic changes. This happens particularly when structural modifications are not expressed as volume density changes and when rough and fine surface density alterations cancel each other at different magnifications. Making use of digital image processing, we present a methodological approach that allows changes in the geometrical properties of the parenchymal lung structure to be quantified easily and quickly, and that closely reflects the visual appreciation of the changes. Randomly sampled digital images from light microscopic sections of lung parenchyma are filtered, binarized, and skeletonized. The lung septa are thus represented as a single-pixel-wide line network with nodal points and end points and the corresponding internodal and end segments. By automatically counting the number of points and measuring the lengths of the skeletal segments, the lung architecture can be characterized and very subtle structural changes can be detected. This new methodological approach to lung structure analysis is highly sensitive to morphological changes in the parenchyma: it detected highly significant quantitative alterations in the structure of the lungs of rats treated with a glucocorticoid hormone, where classical morphometry had partly failed.
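The counting step described above reduces to a local neighborhood test on the skeletonized image: in an 8-connected, single-pixel-wide network, an end point has exactly one skeleton neighbor and a nodal (branch) point has three or more. A minimal sketch on a hand-made toy skeleton (a "Y" shape, not a real micrograph):

```python
# Toy single-pixel-wide skeleton: a "Y" shape (1 = skeleton pixel)
SKELETON = [
    [1, 0, 0, 0, 1],
    [0, 1, 0, 1, 0],
    [0, 0, 1, 0, 0],
    [0, 0, 1, 0, 0],
    [0, 0, 1, 0, 0],
]

def classify_skeleton(grid):
    """Count end points (1 neighbor) and nodal points (>= 3 neighbors)
    among 8-connected skeleton pixels."""
    rows, cols = len(grid), len(grid[0])
    ends, nodes = 0, 0
    for r in range(rows):
        for c in range(cols):
            if not grid[r][c]:
                continue
            # Count 8-connected skeleton neighbors of pixel (r, c)
            nbrs = sum(
                grid[rr][cc]
                for rr in range(max(r - 1, 0), min(r + 2, rows))
                for cc in range(max(c - 1, 0), min(c + 2, cols))
                if (rr, cc) != (r, c)
            )
            if nbrs == 1:
                ends += 1
            elif nbrs >= 3:
                nodes += 1
    return ends, nodes
```

For the "Y" above this yields three end points and one nodal point; on real sections the same per-pixel test, together with segment length measurement, characterizes the septal network.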
Abstract:
Glucocorticoids (GC) are successfully applied in neonatology to improve lung maturation in preterm babies. Animal studies show that GC can also impair lung development. In this investigation, we used a new approach based on digital image analysis. Microscopic images of lung parenchyma were skeletonised and the geometrical properties of the septal network characterised by analysing the 'skeletal' parameters. Inhibition of the process of alveolarisation after extensive administration of small doses of GC in newborn rats was confirmed by significant changes in the 'skeletal' parameters. The induced structural changes in the lung parenchyma were still present after 60 days in adult rats, clearly indicating a long-lasting or even permanent impairment of lung development and maturation caused by GC. Conclusion: Digital image analysis and skeletonisation proved to be a highly suitable approach for assessing structural changes in lung parenchyma.
Abstract:
Problem: Dental radiographs generally display one or more findings/diagnoses, and are linked to a unique set of patient demographics, medical history and other findings not represented by the image. However, this information is not associated with radiographs in any type of meta format, and images are not searchable based on any clinical criteria (1,2). The purpose of this pilot study is to create an online, searchable data repository of dental radiographs to be used for patient care, teaching and research. [See PDF for complete abstract]
Abstract:
PURPOSE Computed tomography (CT) accounts for more than half of the total radiation exposure from medical procedures, which makes dose reduction in CT an effective means of reducing radiation exposure. We analysed the dose reduction that can be achieved with a new CT scanner [Somatom Edge (E)] that incorporates new developments in hardware (detector) and software (iterative reconstruction). METHODS We compared weighted volume CT dose index (CTDIvol) and dose length product (DLP) values of 25 consecutive patients studied with non-enhanced standard brain CT on the new scanner and on each of two previous models, a 64-row multi-detector CT (MDCT) scanner (S64) and a 16-row MDCT scanner (S16). We analysed signal-to-noise and contrast-to-noise ratios in images from the three scanners, and three neuroradiologists performed a quality rating to determine whether the dose reduction techniques still yield sufficient diagnostic quality. RESULTS The CTDIvol of scanner E was 41.5 and 36.4 % less than the values of scanners S16 and S64, respectively; the DLP values were 40 and 38.3 % less. All differences were statistically significant (p < 0.0001). Signal-to-noise and contrast-to-noise ratios were best in S64; these differences also reached statistical significance. Image analysis, however, showed "non-inferiority" of scanner E regarding image quality. CONCLUSIONS The first experience with the new scanner shows that the new dose reduction techniques allow for up to 40 % dose reduction while maintaining image quality at a diagnostically usable level.
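The percentage reductions reported above follow directly from the CTDIvol and DLP values, and DLP is commonly converted to an approximate effective dose with a body-region coefficient. The sketch below uses hypothetical DLP numbers, not the study's data, together with the widely cited adult-head conversion factor of about 0.0021 mSv/(mGy·cm), which is an assumption here:

```python
# Hypothetical DLP values (mGy*cm), for illustration only
DLP_OLD = 1000.0   # older scanner
DLP_NEW = 600.0    # new scanner with iterative reconstruction

K_HEAD = 0.0021    # mSv/(mGy*cm), commonly used adult-head coefficient

def percent_reduction(old, new):
    """Relative dose reduction in percent."""
    return 100.0 * (old - new) / old

def effective_dose(dlp, k=K_HEAD):
    """Approximate effective dose (mSv) from DLP via a region coefficient."""
    return dlp * k

reduction = percent_reduction(DLP_OLD, DLP_NEW)   # 40 % with these numbers
dose_new = effective_dose(DLP_NEW)
```

The same percent_reduction arithmetic applied to the reported CTDIvol values reproduces the 41.5 % and 36.4 % figures in the abstract.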