902 results for Math Applications in Computer Science
Abstract:
OBJECTIVE: To evaluate tools for the fusion of images generated by tomography and structural and functional magnetic resonance imaging. METHODS: Magnetic resonance and functional magnetic resonance imaging were performed while a volunteer who had previously undergone cranial tomography performed motor and somatosensory tasks in a 3-Tesla scanner. Image data were analyzed with different programs, and the results were compared. RESULTS: We constructed a flow chart of computational processes that allowed measurement of the spatial congruence between the methods. There was no single computational tool that contained the entire set of functions necessary to achieve the goal. CONCLUSION: The fusion of the images from the three methods proved to be feasible with the use of four free-access software programs (OsiriX, Register, MRIcro and FSL). Our results may serve as a basis for building software that will be useful as a virtual tool prior to neurosurgery.
Abstract:
OpenCV includes different object detectors based on the Viola-Jones framework. Most of them are specialized to deal with the frontal face pattern and its inner elements: eyes, nose, and mouth. In this paper, we focus on ear pattern detection, particularly when a head profile or almost-profile view is present in the image. We aim at creating real-time ear detectors based on the general object detection framework provided with OpenCV. After training classifiers to detect left ears, right ears, and ears in general, the achieved performance is good enough to feed not only a head pose estimation system but also other applications such as those based on ear biometrics.
Abstract:
In this paper, we address the challenge of gender classification using large databases of images, with two goals. The first objective is to evaluate whether the error rate decreases compared to smaller databases. The second goal is to determine whether the classifier that provides the best classification rate for one database also improves the classification results for other databases, that is, the cross-database performance.
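The cross-database protocol described above can be sketched in a few lines: train on each database in turn, evaluate on every database, and tabulate the results in an accuracy matrix whose off-diagonal entries measure cross-database performance. The two toy "databases" and the nearest-centroid classifier below are illustrative stand-ins, not the paper's actual descriptors or models:

```python
# Cross-database evaluation sketch: train on one image database,
# test on every database (including itself), collect an accuracy matrix.
# Features, labels, and the classifier are toy stand-ins for illustration.

def centroid(points):
    """Component-wise mean of a list of feature vectors."""
    n = len(points)
    return tuple(sum(p[i] for p in points) / n for i in range(len(points[0])))

def train(db):
    """Nearest-centroid model: one centroid per class label."""
    by_label = {}
    for x, y in db:
        by_label.setdefault(y, []).append(x)
    return {y: centroid(xs) for y, xs in by_label.items()}

def predict(model, x):
    """Return the label whose centroid is closest (squared distance)."""
    return min(model, key=lambda y: sum((a - b) ** 2 for a, b in zip(model[y], x)))

def accuracy(model, db):
    return sum(predict(model, x) == y for x, y in db) / len(db)

# Each "database" is a list of (feature_vector, gender_label) pairs.
databases = {
    "A": [((0.0, 0.1), "f"), ((0.1, 0.0), "f"), ((1.0, 0.9), "m"), ((0.9, 1.0), "m")],
    "B": [((0.2, 0.3), "f"), ((0.1, 0.2), "f"), ((0.8, 0.7), "m"), ((0.9, 0.8), "m")],
}

# Rows = training database, columns = test database.
matrix = {(tr, te): accuracy(train(dtr), dte)
          for tr, dtr in databases.items()
          for te, dte in databases.items()}
```

The diagonal entries correspond to the within-database error rates of the first goal; comparing a row's off-diagonal entries against the diagonal is exactly the cross-database question posed by the second goal.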
Abstract:
In this paper, we focus on gender recognition in challenging large-scale scenarios. Firstly, we review the literature results achieved for the problem on large datasets and select the currently hardest dataset: The Images of Groups. Secondly, we study the extraction of features from the face and its local context to improve recognition accuracy. Different descriptors, resolutions, and classifiers are studied, overcoming previous literature results and reaching an accuracy of 89.8%.
Abstract:
The suitability of hybrid materials based on zinc phosphate hydrate cements for use as corrosion-inhibiting inorganic pigments or in prosthetic and restorative bone and dental therapy has been intensively investigated empirically worldwide since the 1990s. In the present work, reference samples, i.e. alpha- and beta-hopeite (abbreviated a-ZPT and b-ZPT), were first prepared by a hydrothermal crystallization process in aqueous medium at 20°C and 90°C. The crystal structure of both polymorphs of zinc phosphate tetrahydrate, Zn3(PO4)2·4H2O, was completely determined. Single-crystal structure analysis shows that the main difference between the alpha and beta forms of zinc phosphate tetrahydrate lies in two different arrangements of the hydrogen bonds. The corresponding three- and two-dimensional hydrogen-bond networks of a-ZPT and b-ZPT each induce different thermal behavior on heating. While the alpha form loses its water of crystallization in two well-defined steps, the beta form produces unstable dehydration products. This corresponds to two independent but concurrent dehydration mechanisms: (i) at low heating rates, a two-dimensional Johnson-Mehl-Avrami (JMA) mechanism on the (011) plane, which occurs preferentially at crystal edges and is governed by existing crystal defects on surfaces; (ii) at high heating rates, a two-dimensional diffusion mechanism (D2), which proceeds first on the (101) plane and then on the (110) plane. By treating the ZPT dehydration as an irreversible heterogeneous solid-state step reaction, the dehydration phase diagram was established by means of a "similar end product" protocol. It describes the possible relationships between the different hydration states and indicates the existence of a transition state at about 170°C (i.e. the reaction b-ZPT → a-ZPT).
In addition, a targeted chemical etching procedure with dilute H3PO4 and NH3 solutions was applied to examine in detail the first stage of the dissolution of zinc phosphate. Alpha- and beta-hopeite exhibit characteristic hexagonal and cubic etch pits, which widen under crystallographic control. A reliable description of the surface chemistry and topology could only be obtained through AFM and FFM experiments. At the same time, the surface defect density and distribution and the bulk dissolution rates of a-ZPT and b-ZPT were determined in this way. In a second approach, an innovative strategy for the preparation of basic zinc phosphate pigments of the first and second generation (i.e. NaZnPO4·1H2O and Na2ZnPO4(OH)·2H2O) was pursued, using on the one hand surface-modified polystyrene latices (e.g. produced by a miniemulsion polymerization process) and on the other hand dendrimers based on polyamidoamine (PAMAM). Depending on increasing sodium and water content, the resulting zeolite structure (ZPO) adopts different controlled morphologies: hexagonal, cubic, heart-shaped, six-armed stars, lancet-shaped dendrites, etc. Carboxylated fluorescence-labeled latices were used for the quantitative evaluation of polymer incorporation into the crystal structure. It turns out that polymer additives not only reduce growth by up to 8 µm·min-1 but also appear to act as strong nucleation accelerators. Thanks to coordination chemistry (i.e. formation of a six-center complex L-COO-Zn-PO4·H2O with ligand exchange), two simple mechanisms for the action of latex particles in ZPO crystallization could be identified: (i) an intra-corona and (ii) an extra-corona nucleation mechanism.
Furthermore, the efficiency of short-term and long-term corrosion protection by tailor-made ZPO/ZPT pigments and the controlled release of phosphate ions was estimated in two approximations of the dissolution equilibrium: (i) by a leaching method (thermodynamic process) and (ii) by a pH-pulse method (kinetic process). The dissolution-precipitation mechanism (i.e. metamorphism) becomes particularly evident. The essential role of sodium ions in corrosion inhibition is described by a suitable composition-dependent dissolution model (ZAAM), which is consistent with the findings of the salt-spray and humidity-chamber tests. Finally, this work demonstrates the outstanding potential of functionalized latices (polymers) in controlled mineralization for the production of tailor-made zinc phosphate materials. Such hybrid materials are urgently needed in the development of environmentally friendly anticorrosion pigments as well as in dental medicine.
Abstract:
A proposal for a virtual museum of computer science
Abstract:
The biggest challenge facing software developers today is how to gracefully evolve complex software systems in the face of changing requirements. We clearly need software systems to be more dynamic, compositional and model-centric, but instead we continue to build systems that are static, baroque and inflexible. How can we better build change-enabled systems in the future? To answer this question, we propose to look back to one of the most successful systems to support change, namely Smalltalk. We briefly introduce Smalltalk with a few simple examples, and draw some lessons for software evolution. Smalltalk's simplicity, its reflective design, and its highly dynamic nature all go a long way towards enabling change in Smalltalk applications. We then illustrate how these lessons work in practice by reviewing a number of research projects that support software evolution by exploiting Smalltalk's design. We conclude by summarizing open issues and challenges for change-enabled systems of the future.
Abstract:
Following the previous two years' workshops on dynamic languages at the ECOOP conference, the Dyla 2007 workshop was a successful and popular event. As its name implies, the workshop's focus was on dynamic languages and their applications. Topics and discussions at the workshop included macro expansion mechanisms, extensions of the method lookup algorithm, language interpretation, reflexivity, and languages for mobile ad hoc networks. The main goal of this workshop was to bring together different dynamic language communities and to favour cross-community interaction. Dyla 2007 was organised as a full-day meeting, partly devoted to the presentation of submitted position papers and partly to tool demonstrations. All accepted papers can be downloaded from the workshop's web site. In this report, we provide an overview of the presentations and a summary of the discussions.
Abstract:
Non-invasive documentation methods such as surface scanning and radiological imaging are gaining in importance in the forensic field. These three-dimensional technologies provide digital 3D data, which are processed and handled in the computer. However, the sense of touch gets lost using the virtual approach. The haptic device enables the use of the sense of touch to handle and feel digital 3D data. The multifunctional application of a haptic device for forensic approaches is evaluated and illustrated in three different cases: the non-invasive representation of bone fractures of the lower extremities caused by traffic accidents; the comparison of bone injuries with the presumed injury-inflicting instrument; and, in a gunshot case, the identification of the gun by the muzzle imprint and the reconstruction of the holding position of the gun. The 3D models of the bones are generated from the Computed Tomography (CT) images. The 3D models of the exterior injuries, the injury-inflicting tools and the bone injuries, where a higher resolution is necessary, are created by the optical surface scan. The haptic device is used in combination with the software FreeForm Modelling Plus for touching the surface of the 3D models to feel the minute injuries and the surface of tools, to reposition displaced bone parts and to compare an injury-causing instrument with an injury. The repositioning of 3D models in a reconstruction is easier, faster and more precise when using the sense of touch, together with user-friendly movement in 3D space. For representation purposes, the fracture lines of bones are coloured. This work demonstrates that the haptic device is a suitable and efficient application in forensic science. The haptic device offers a new way of handling digital data in the virtual 3D space.
Abstract:
This dissertation serves as a call to geoscientists to share responsibility with K-12 educators for increasing Earth science literacy. When partnerships are created among K-12 educators and geoscientists, the synergy created can promote Earth science literacy in students, teachers, and the broader community. The research described here resulted in development of tools that can support effective professional development for teachers. One tool is used during the planning stages to structure a professional development program, another set of tools supports measurement of the effectiveness of a development program, and the third tool supports sustainability of professional development programs. The Michigan Teacher Excellence Program (MiTEP), a Math/Science Partnership project funded by the National Science Foundation, served as the test bed for developing and testing these tools. The first tool, the planning tool, is the Earth Science Literacy Principles (ESLP). The ESLP served as a planning tool for the two-week summer field courses as part of the MiTEP program. The ESLP, published in 2009, clearly describe what an Earth science literate person should know. The ESLP consists of nine big ideas and their supporting fundamental concepts. Using the ESLP for planning a professional development program helped both instructors and teacher-participants focus on important concepts throughout the professional development activity. The measurement tools were developed to measure change in teachers’ Earth science content-area knowledge and perceptions related to teaching and learning that result from participating in a professional development program. The first measurement tool, the Earth System Concept Inventory (ESCI), directly measures content-area knowledge through a succession of multiple-choice questions that are aligned with the content of the professional development experience. 
The second measurement tool, an exit survey, collects qualitative data from teachers regarding their impression of the professional development. Both the ESCI and the exit survey were tested for validity and reliability. Lesson study is discussed here as a strategy for sustaining professional development in a school or a district after the end of a professional development activity. Lesson study, as described here, was offered as a formal course. Teachers engaged in lesson study worked collaboratively to design and test lessons that improve the teachers’ classroom practices. Data regarding the impact of the lesson study activity were acquired through surveys, written documents, and group interviews. The data are interpreted to indicate that the lesson study process improved teacher quality and classroom practices. In the case described here, the lesson study process was adopted by the teachers’ district and currently serves as part of the district’s work in Professional Learning Communities, resulting in ongoing professional development throughout the district.
Abstract:
Interactive TV technology has been addressed in many previous works, but there is sparse research on the topic of interactive content broadcasting and how to support the production process. In this article, the interactive broadcasting process is broadly defined to include studio technology and digital TV applications at consumer set-top boxes. In particular, augmented reality studio technology employs smart projectors as light sources and blends real scenes with interactive computer graphics that are controlled at end-user terminals. Moreover, TV producer-friendly multimedia authoring tools empower the development of novel TV formats. Finally, the support for user-contributed content raises the potential to revolutionize the hierarchical TV production process by introducing the viewer as part of the content delivery chain.
Abstract:
This paper proposes the Optimized Power save Algorithm for continuous Media Applications (OPAMA) to improve end-user device energy efficiency. OPAMA enhances the standard legacy Power Save Mode (PSM) of IEEE 802.11 by taking into consideration application specific requirements combined with data aggregation techniques. By establishing a balanced cost/benefit tradeoff between performance and energy consumption, OPAMA is able to improve energy efficiency, while keeping the end-user experience at a desired level. OPAMA was assessed in the OMNeT++ simulator using real traces of variable bitrate video streaming applications. The results showed the capability to enhance energy efficiency, achieving savings up to 44% when compared with the IEEE 802.11 legacy PSM.
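OPAMA's gain comes from serving several aggregated frames per radio wake-up instead of one, so the radio can sleep longer between wake-ups. The sketch below is a minimal back-of-the-envelope model of that effect; the power and timing figures are hypothetical assumptions for illustration, not values from the paper's OMNeT++ evaluation:

```python
# Toy model of energy saved by aggregating downlink frames so the
# Wi-Fi radio sleeps longer between wake-ups. All power/timing figures
# are illustrative assumptions, not values from the OPAMA paper.

def awake_fraction(frames_per_second, wake_cost_s, aggregation):
    """Fraction of time the radio is awake when `aggregation` frames
    are delivered per wake-up (legacy PSM corresponds to aggregation=1)."""
    wakeups_per_second = frames_per_second / aggregation
    return min(1.0, wakeups_per_second * wake_cost_s)

def average_power(frames_per_second, wake_cost_s, aggregation,
                  p_awake=1.0, p_sleep=0.05):
    """Average power (arbitrary units) at a given aggregation level."""
    f = awake_fraction(frames_per_second, wake_cost_s, aggregation)
    return f * p_awake + (1.0 - f) * p_sleep

# 50 frames/s video, 10 ms cost per wake-up (both hypothetical).
legacy = average_power(50, 0.01, aggregation=1)      # one wake-up per frame
opama_like = average_power(50, 0.01, aggregation=4)  # four frames per wake-up

saving = 1.0 - opama_like / legacy
print(f"estimated saving: {saving:.0%}")
```

With these toy numbers, aggregating four frames per wake-up cuts the awake fraction from 50% to 12.5% of the time; the actual savings reported by the paper (up to 44%) depend on the real traffic traces, the PSM parameters, and the application-level delay constraints that OPAMA balances against energy.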