899 results for Image recognition and processing
Abstract:
Web services (WS) technology provides a comprehensive solution for representing, discovering, and invoking services in a wide variety of environments, including Service-Oriented Architectures (SOA) and grid computing systems. At the core of WS technology lie a number of XML-based standards, such as the Simple Object Access Protocol (SOAP), that have successfully ensured WS extensibility, transparency, and interoperability. Nonetheless, there is an increasing demand to enhance WS performance, which is severely impaired by XML's verbosity. SOAP communications produce considerable network traffic, making them unfit for distributed, loosely coupled, and heterogeneous computing environments such as the open Internet. They also introduce higher latency and processing delays than other technologies, like Java RMI and CORBA. WS research has recently focused on SOAP performance enhancement. Many approaches build on the observation that SOAP message exchange usually involves highly similar messages (those created by the same implementation usually have the same structure, and those sent from a server to multiple clients tend to show similarities in structure and content). Similarity evaluation and differential encoding have thus emerged as SOAP performance enhancement techniques. The main idea is to identify the common parts of SOAP messages, which need to be processed only once, avoiding a large amount of overhead. Other approaches investigate nontraditional processor architectures, including micro- and macro-level parallel processing solutions, so as to further increase the processing rates of SOAP/XML software toolkits. This survey paper provides a concise yet comprehensive review of the research efforts aimed at SOAP performance enhancement. A unified view of the problem is provided, covering almost every phase of SOAP processing: message parsing, serialization, deserialization, compression, multicasting, security evaluation, and data/instruction-level processing.
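To make the differential-encoding idea above concrete, here is a minimal sketch, assuming a sender and receiver that share a previously exchanged message as a template; the envelope content, the function names, and the use of Python's difflib are illustrative and are not taken from any of the surveyed techniques.

```python
# Minimal sketch of differential SOAP encoding: messages created by the same
# implementation share a template, so only the lines that differ need transmitting.
import difflib

# Shared template (e.g., a message previously exchanged with the same peer).
TEMPLATE = """<soap:Envelope xmlns:soap="http://www.w3.org/2003/05/soap-envelope">
  <soap:Body>
    <GetQuote><Symbol>AAAA</Symbol></GetQuote>
  </soap:Body>
</soap:Envelope>"""

MESSAGE = TEMPLATE.replace("AAAA", "IBM")  # new message, structurally identical

def encode(template: str, message: str) -> list[str]:
    """Produce a line-level delta between the shared template and a concrete message."""
    return list(difflib.ndiff(template.splitlines(), message.splitlines()))

def decode(delta: list[str]) -> str:
    """Receiver side: rebuild the full message (difflib.restore reconstructs the
    'second' sequence of an ndiff delta)."""
    return "\n".join(difflib.restore(delta, 2))

delta = encode(TEMPLATE, MESSAGE)
assert decode(delta) == MESSAGE
# Only the <GetQuote> line appears as changed; the rest of the envelope is shared.
```

In practice a differential encoder would transmit only the changed lines together with their positions, since the unchanged lines are already implied by the shared template; difflib.ndiff is used here purely to keep the round trip verifiable in a few lines.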
Abstract:
This paper presents an optimum user-steered boundary tracking approach for image segmentation, which simulates the behavior of water flowing through a riverbed. The riverbed approach was devised using the image foresting transform with a never-exploited connectivity function. We analyze its properties in the derived image graphs and discuss its theoretical relation with other popular methods such as live wire and graph cuts. Several experiments show that riverbed can significantly reduce the number of user interactions (anchor points), as compared to live wire for objects with complex shapes. This paper also includes a discussion about how to combine different methods in order to take advantage of their complementary strengths.
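As a reading aid for the graph formulation mentioned above, the sketch below shows a generic image foresting transform computed by Dijkstra-like propagation with the classic f_max connectivity function; this function, the 4-neighborhood, and the arc-cost image are assumptions for illustration only, and the riverbed connectivity function introduced in the paper is not reproduced here.

```python
# Minimal sketch of optimum-path boundary tracking on an image graph via the
# image foresting transform (IFT). The f_max connectivity function used here is
# the classic live-wire-style cost, NOT the riverbed function from the paper.
import heapq
import numpy as np

def ift_path(arc_cost: np.ndarray, seed: tuple[int, int], target: tuple[int, int]):
    """arc_cost[y, x]: local cost of entering pixel (y, x), e.g. an inverse gradient."""
    h, w = arc_cost.shape
    cost = np.full((h, w), np.inf)
    pred = {}
    cost[seed] = 0.0
    heap = [(0.0, seed)]
    while heap:
        c, (y, x) = heapq.heappop(heap)
        if (y, x) == target:
            break
        if c > cost[y, x]:
            continue  # stale heap entry
        for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            ny, nx = y + dy, x + dx
            if 0 <= ny < h and 0 <= nx < w:
                new_cost = max(c, arc_cost[ny, nx])   # f_max connectivity function
                if new_cost < cost[ny, nx]:
                    cost[ny, nx] = new_cost
                    pred[(ny, nx)] = (y, x)
                    heapq.heappush(heap, (new_cost, (ny, nx)))
    # Backtrack from the target (next anchor point) to the seed (previous anchor).
    path, node = [], target
    while node != seed:
        path.append(node)
        node = pred[node]
    path.append(seed)
    return path[::-1]
```

In an interactive setting, each user-placed anchor point becomes the seed for the next propagation, so the boundary is built as a chain of optimum paths between consecutive anchors.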
Abstract:
This paper compares the effectiveness of the Tsallis entropy over the classic Boltzmann-Gibbs-Shannon entropy for general pattern recognition, and proposes a multi-q approach to improve pattern analysis using entropy. A series of experiments were carried out for the problem of classifying image patterns. Given a dataset of 40 pattern classes, the goal of our image case study is to assess how well the different entropies can be used to determine the class of a newly given image sample. Our experiments show that the Tsallis entropy using the proposed multi-q approach has great advantages over the Boltzmann-Gibbs-Shannon entropy for pattern classification, boosting image recognition rates by a factor of 3. We discuss the reasons behind this success, shedding light on the usefulness of the Tsallis entropy and the multi-q approach. (C) 2012 Elsevier B.V. All rights reserved.
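For readers unfamiliar with the multi-q idea, a minimal sketch follows, assuming a normalized histogram as the pattern descriptor; the Tsallis entropy formula S_q = (1 − Σ_i p_i^q)/(q − 1) is standard, while the q range and the use of the resulting vector as classifier features are illustrative assumptions rather than the paper's exact protocol.

```python
# Minimal sketch of the multi-q approach: evaluate the Tsallis entropy of a
# normalized descriptor over a range of q values and use the vector as a feature.
import numpy as np

def tsallis_entropy(p: np.ndarray, q: float) -> float:
    p = p[p > 0]
    if np.isclose(q, 1.0):                      # q -> 1 recovers the Shannon entropy
        return float(-np.sum(p * np.log(p)))
    return float((1.0 - np.sum(p ** q)) / (q - 1.0))

def multi_q_features(p: np.ndarray, qs=np.arange(0.1, 3.0, 0.1)) -> np.ndarray:
    """Feature vector: one Tsallis entropy value per q, to be fed to a classifier."""
    return np.array([tsallis_entropy(p, q) for q in qs])

# Example: a normalized grey-level histogram of an image patch (synthetic here).
hist = np.random.rand(256)
hist /= hist.sum()
print(multi_q_features(hist)[:5])
```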
Abstract:
Facial expression recognition is one of the most challenging research areas in the image recognition field and has been actively studied since the 1970s. For instance, smile recognition has been studied because the smile is considered an important facial expression in human communication and is therefore likely to be useful for human–machine interaction. Moreover, if a smile can be detected and its intensity estimated, this will open up the possibility of new applications in the future.
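As a concrete, deliberately simple illustration of smile detection, the sketch below uses the stock Haar cascades that ship with OpenCV; the input file name, scale factors, and neighbor thresholds are assumptions for illustration and do not correspond to the method of any particular work listed here.

```python
# Minimal smile-detection sketch using OpenCV's bundled Haar cascades
# (haarcascade_frontalface_default.xml and haarcascade_smile.xml ship with OpenCV).
import cv2

face_cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")
smile_cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_smile.xml")

img = cv2.imread("image.jpg")                  # hypothetical input image
gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)

for (x, y, w, h) in face_cascade.detectMultiScale(gray, scaleFactor=1.3, minNeighbors=5):
    roi = gray[y:y + h, x:x + w]               # search for smiles only inside each face
    smiles = smile_cascade.detectMultiScale(roi, scaleFactor=1.7, minNeighbors=20)
    print("smile detected" if len(smiles) > 0 else "no smile")
```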
Abstract:
Biological processes are very complex mechanisms, most of them being accompanied by or manifested as signals that reflect their essential characteristics and qualities. The development of diagnostic techniques based on signal and image acquisition from the human body is commonly regarded as one of the driving factors behind the recent advances in medicine and the biosciences. It is a fact that the instruments used for biological signal and image recording, like any other acquisition system, are affected by non-idealities which, to different degrees, negatively impact the accuracy of the recording. This work discusses how these effects can be attenuated, and ideally removed, with particular attention to ultrasound imaging and extracellular recordings. Original algorithms developed during the Ph.D. research activity are examined and compared with algorithms in the literature that tackle the same problems; conclusions are drawn on the basis of comparative tests on both synthetic and in-vivo acquisitions, evaluating standard metrics in the respective fields of application. All the developed algorithms share an adaptive approach to signal analysis, meaning that their behavior is not dependent only on design choices but is also driven by the characteristics of the input signal. Performance comparisons based on state-of-the-art metrics for image quality assessment, contrast gain estimation and resolution gain quantification, as well as visual inspection, highlighted very good results for the proposed ultrasound image deconvolution and restoration algorithms: axial resolutions up to 5 times better than those of algorithms in the literature are achievable. Concerning extracellular recordings, the results of the proposed denoising technique, compared with other signal processing algorithms, showed an improvement over the state of the art of almost 4 dB.
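To illustrate the kind of deconvolution problem addressed above, here is a minimal non-adaptive Wiener deconvolution sketch using scikit-image; the Gaussian point-spread function, noise level, and regularization balance are assumptions for illustration, and the thesis' adaptive algorithms are not reproduced.

```python
# Minimal sketch of image deconvolution via Wiener filtering with a known PSF.
import numpy as np
from scipy.ndimage import gaussian_filter
from skimage import restoration

rng = np.random.default_rng(0)
true_image = rng.random((128, 128))

# Simulate a blurred, noisy acquisition with a Gaussian point-spread function (PSF).
psf = np.zeros((9, 9))
psf[4, 4] = 1.0
psf = gaussian_filter(psf, sigma=1.5)
psf /= psf.sum()
observed = gaussian_filter(true_image, sigma=1.5) + 0.01 * rng.standard_normal((128, 128))

# Wiener deconvolution: 'balance' trades data fidelity against noise amplification.
restored = restoration.wiener(observed, psf, balance=0.1)
print(restored.shape)
```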
Abstract:
Bread dough, and particularly wheat dough, is probably the most dynamic and complicated rheological system because of its viscoelastic behaviour, and its characteristics are very important since they strongly affect the textural and sensorial properties of the final products. The study of dough rheology has been a very challenging task for many researchers, since it can provide a great deal of information about dough formulation, structure and processing. This explains why dough rheology has been a matter of investigation for several decades. In this research, rheological assessment of doughs and breads was performed using empirical and fundamental methods at both small and large deformation, in order to characterize different types of doughs and final products such as bread. In order to study the structural aspects of food products, image analysis techniques were used to integrate the information coming from empirical and fundamental rheological measurements. Dough properties were evaluated by texture profile analysis (TPA), dough stickiness (Chen and Hoseney cell) and uniaxial extensibility determination (Kieffer test) using a texture analyser; small-deformation rheological measurements were performed on a controlled stress–strain rheometer; the structure of the different doughs was observed using image analysis; and bread characteristics were studied using texture profile analysis (TPA) and image analysis. The objective of this research was to understand whether the different rheological measurements were able to characterize and differentiate the samples analysed, in order to investigate the effect of different formulations and processing conditions on dough and final product from a structural point of view. To this end, the following materials were prepared and analysed: frozen dough made without yeast; frozen dough and bread made from frozen dough; doughs obtained using different fermentation methods; doughs made with Kamut® flour; dough and bread made with the addition of ginger powder; and final products from different bakeries. The influence of sub-zero storage time on the viscoelastic performance of non-fermented and fermented dough and on the final product (bread) was evaluated using small-deformation and large-deformation methods. In general, the longer the sub-zero storage time, the poorer the positive viscoelastic attributes. The effects of fermentation time and of different types of fermentation (straight-dough method, sponge-and-dough procedure and poolish method) on the rheological properties of doughs were investigated using empirical and fundamental analyses, and image analysis was used to integrate this information through the evaluation of the dough structure. The results of the fundamental rheological tests showed that the incorporation of sourdough (poolish method) provoked changes different from those seen with the other types of fermentation. The positive effect of some ingredients (extra-virgin olive oil and a liposomic lecithin emulsifier) in improving the rheological characteristics of Kamut® dough was confirmed also when the dough was subjected to low temperatures (24 hours and 48 hours at 4°C).
Small-deformation oscillatory measurements and large-deformation mechanical tests provided useful information on the rheological properties of samples made with different amounts of ginger powder, showing that the sample with the highest amount of ginger powder (6%) had worse rheological characteristics than the other samples. Moisture content, specific volume, texture and crumb grain characteristics are the major quality attributes of bread products. The different samples analysed, “Coppia Ferrarese”, “Pane Comune Romagnolo” and “Filone Terra di San Marino”, showed a decrease in crumb moisture and an increase in hardness over storage time. Parameters such as cohesiveness and springiness, which are evaluated by TPA and are indicators of fresh-bread quality, decreased during storage. Using empirical rheological tests, we found several differences among the samples, due to the different ingredients used in the formulations and the different processes adopted to prepare the samples; but since these products are handmade, the differences can be regarded as added value. In conclusion, small-deformation (in fundamental units) and large-deformation methods played a significant role in monitoring the influence of different ingredients, processing and storage conditions on dough viscoelastic performance and on the final product. Finally, knowledge of formulation, processing and storage conditions, together with the evaluation of structural and rheological characteristics, is fundamental for the study of complex matrices such as bakery products, where numerous variables (e.g. raw materials, bread-making procedure, and the time and temperature of fermentation and baking) can influence the final quality.
Abstract:
DNA-methylating compounds are widely used as anti-cancer chemotherapeutics. The pharmacologically critical DNA lesion induced by these drugs is O6-methylguanine (O6MeG). O6MeG is highly mutagenic and genotoxic, triggering apoptosis. Despite the potency of O6MeG in inducing cell death, the mechanism of O6MeG-induced toxicity is still poorly understood. By comparing the response of mouse fibroblasts that were wild-type (wt) or deficient for the ataxia telangiectasia mutated (ATM) protein, a kinase responsible for both the recognition and the signalling of DNA double-strand breaks (DSBs), it was shown that ATM-deficient cells are more sensitive to the methylating agents N-methyl-N'-nitro-N-nitrosoguanidine (MNNG) and methyl methanesulfonate (MMS) and to the anti-cancer drug temozolomide, in both colony formation and apoptosis assays. This clearly shows that DSBs are involved in O6MeG toxicity. Inactivating the O6MeG repair enzyme O6-methylguanine-DNA methyltransferase (MGMT) with the specific inhibitor O6-benzylguanine (O6BG) made both ATM wt and deficient cells more sensitive to MNNG and MMS; the opposite effect was observed when MGMT was over-expressed in ATM -/- cells. The results show that O6MeG is the critical DNA lesion causing death in ATM cells following MNNG treatment, and is partially responsible for the toxicity observed following MMS treatment. Furthermore, by inhibiting the ATM kinase activity with caffeine, it was shown that the resistance of wt cells to MNNG was due to the kinase activity of ATM, as wt cells underwent more apoptosis following methylating agent treatment in the presence of caffeine. Apoptosis and caspase-3 activation were late events, starting 48 h after treatment. This lends support to the model in which O6MeG lesions are converted into DSBs during replication. As ATM wt and deficient cells showed similar G2/M blockage and Chk1 activation following MNNG and MMS treatment, it was concluded that the protective effect of ATM is not due to cell cycle progression control. The hypersensitivity of ATM-deficient cells was accompanied by their inability to activate the anti-apoptotic NF-κB pathway. In a second part of this study, it was shown that the inflammatory cytokine IL-1 up-regulates the DNA repair gene apurinic endonuclease 2 (APEX2). Up-regulation of APEX2 occurred through transcriptional regulation, as it was abrogated by actinomycin D. APEX2 mRNA accumulation was accompanied by an increase in the APEX2 protein level. IL-1-induced APEX2 expression, as well as transfection of cells with APEX2 cDNA, correlated positively with a decrease in apoptosis after treatment with genotoxic agents, particularly affecting cell death after H2O2. This indicates an involvement of APEX2 in the BER pathway in cells responding to IL-1.
Abstract:
The use of stone and the ways it has been worked have been very important in the vernacular architecture of the cross-border Carso. In the Carso this represents an important centuries-old legacy and is typologically uniform to a large extent. Stone was the main constituent of the local architecture, setting and shaping the human environment and incorporating the history of places through its specific symbolic and constructive language. The primary aim of this research is the recognition of the constructive rules and the values embedded in Carso rural architecture through the use and working of stone. Central to this investigation is the typological reading, aimed at analysing the constructive language expressed by this legacy through the relationship between type, technique and material.
Abstract:
Over the last three decades, remote sensing and GIS have become increasingly important in the geosciences for improving conventional methods of data collection and map production. The present work deals with the application of remote sensing and geographic information systems (GIS) to geomorphological investigations. Combining both techniques makes it possible to capture geomorphological forms both in overview and in detail. Topographic and geological maps, satellite images and climate data are used as the basic data for this work. The thesis consists of six chapters. The first chapter gives a general overview of the study area, describing its morphological units, its climatic conditions (in particular the aridity indices of the coastal and mountain landscapes) and its settlement pattern. Chapter 2 deals with the regional geology and stratigraphy of the study area. An attempt is made to identify the main formations with the help of ETM satellite imagery, using the following methods: colour band composites, image rationing and supervised classification. Chapter 3 describes the structurally controlled surface forms in order to clarify the interaction between tectonics and geomorphological processes. It discusses the various methods, for example image processing, used to reliably interpret the lineaments present in the mountain body; special filtering methods are applied to map the most important lineaments. Chapter 4 presents an attempt to automatically extract the drainage network from processed SRTM satellite data. It is discussed in detail to what extent the quality of small-scale SRTM data is comparable with that of large-scale topographic maps in these processing steps. Furthermore, hydrological parameters are derived through a qualitative and quantitative analysis of the discharge regime of individual wadis, and the origin of the drainage systems is interpreted on the basis of geomorphological and geological evidence. Chapter 5 deals with the assessment of the hazard posed by episodic wadi floods. The probability of their annual occurrence, and of the occurrence of severe floods at intervals of several years, is traced back to 1921 in a historical review. The role of rain-bearing depressions that develop over the Red Sea and can produce runoff is investigated using the IDW (Inverse Distance Weighted) method, and further rain-bearing weather situations are examined with the help of Meteosat infrared images. The period 1990-1997, during which heavy rainfall triggered wadi floods, is examined in more detail. Flood events and flood levels are determined from hydrographic data (gauge measurements), and the land use and settlement structure in the catchment area of a wadi are also taken into account. Chapter 6 deals with the different coastal forms on the western side of the Red Sea, for example erosional forms, depositional forms and submerged forms. The concluding part addresses the stratigraphy and dating of submarine terraces on coral reefs and their comparison with similar terraces on the Egyptian Red Sea coast west and east of the Sinai Peninsula.
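Since the IDW (Inverse Distance Weighted) method is named explicitly above, a minimal sketch of it follows; the power parameter, station coordinates and rainfall values are illustrative assumptions, not data from the thesis.

```python
# Minimal sketch of Inverse Distance Weighted (IDW) interpolation of scattered
# observations (e.g. station rainfall) onto arbitrary query points.
import numpy as np

def idw(xy_known: np.ndarray, values: np.ndarray, xy_query: np.ndarray, power: float = 2.0):
    """Interpolate values at xy_query from scattered observations (xy_known, values)."""
    out = np.empty(len(xy_query))
    for i, q in enumerate(xy_query):
        d = np.linalg.norm(xy_known - q, axis=1)
        if np.any(d < 1e-12):                 # query point coincides with a station
            out[i] = values[np.argmin(d)]
            continue
        w = 1.0 / d ** power                  # closer stations get larger weights
        out[i] = np.sum(w * values) / np.sum(w)
    return out

# Example: rainfall (mm) at four hypothetical stations, interpolated at two points.
stations = np.array([[0.0, 0.0], [10.0, 0.0], [0.0, 10.0], [10.0, 10.0]])
rain_mm = np.array([5.0, 12.0, 7.0, 20.0])
print(idw(stations, rain_mm, np.array([[5.0, 5.0], [1.0, 1.0]])))
```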
Abstract:
The temporal bone is ideal for low-dose CT because of its intrinsic high contrast. The aim of this study was to retrospectively evaluate image quality and radiation doses of a new low-dose versus a standard high-dose pediatric temporal bone CT protocol and to review dosimetric data from the literature.
Abstract:
We tested normal young and elderly adults and elderly Alzheimer’s disease (AD) patients on recognition memory for tunes. In Experiment 1, AD patients and age-matched controls received a study list and an old/new recognition test of highly familiar, traditional tunes, followed by a study list and test of novel tunes. The controls performed better than did the AD patients. The controls showed the “mirror effect” of increased hits and reduced false alarms for traditional versus novel tunes, whereas the patients false-alarmed as often to traditional tunes as to novel tunes. Experiment 2 compared young adults and healthy elderly persons using a similar design. Performance was lower in the elderly group, but both younger and older subjects showed the mirror effect. Experiment 3 produced confusion between preexperimental familiarity and intraexperimental familiarity by mixing traditional and novel tunes in the study lists and tests. Here, the subjects in both age groups resembled the patients of Experiment 1 in failing to show the mirror effect. Older subjects again performed more poorly, and they differed qualitatively from younger subjects in setting stricter criteria for more nameable tunes. Distinguishing different sources of global familiarity is a factor in tune recognition, and the data suggest that this type of source monitoring is impaired in AD and involves different strategies in younger and older adults.
Abstract:
As an experimental flow-measurement technique in fluid dynamics, particle image velocimetry (PIV) provides significant advantages over other measurement techniques in its field. In contrast to temperature- and pressure-based probe measurements or other laser diagnostic techniques, including laser Doppler velocimetry (LDV) and phase Doppler particle analysis (PDPA), PIV is unique due to its whole-field measurement capability, non-intrusive nature, and ability to collect a vast amount of experimental data in a short time frame, providing both quantitative and qualitative insight. However, as an optical measurement technique, PIV also requires substantial technical understanding and application experience to acquire consistent, reliable results. Both a technical understanding of particle image velocimetry and practical application experience were gained by applying a planar PIV system at Michigan Technological University's Combustion Science Exploration Laboratory (CSEL) and Alternative Fuels Combustion Laboratory (AFCL). Here a PIV system was applied to non-reacting and reacting gaseous environments to make two-component planar PIV as well as three-component stereoscopic PIV flow-field velocity measurements, in conjunction with chemiluminescence imaging in the case of reacting flows. This thesis outlines near-surface flow-field characteristics in a tumble-strip-lined channel, three-component velocity profiles of non-reacting and reacting swirled flow in a swirl-stabilized, lean, premixed/prevaporized-fuel model gas turbine combustor operating on methane at 5-7 kW, and two-component planar PIV measurements characterizing the AFCL's 1.1-litre closed combustion chamber under dual-fan-driven turbulent mixing flow.
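For readers unfamiliar with how PIV turns particle images into velocities, the sketch below shows the core cross-correlation step on a single pair of interrogation windows; the window size and the synthetic data are assumptions for illustration, and this is not the processing chain used in the thesis.

```python
# Minimal sketch of the PIV cross-correlation step: the particle displacement
# between two interrogation windows is the location of the cross-correlation peak.
import numpy as np

def window_displacement(win_a: np.ndarray, win_b: np.ndarray) -> tuple[float, float]:
    """Return (dy, dx) displacement in pixels between two interrogation windows."""
    a = win_a - win_a.mean()
    b = win_b - win_b.mean()
    corr = np.fft.irfft2(np.conj(np.fft.rfft2(a)) * np.fft.rfft2(b), s=a.shape)
    peak = np.unravel_index(np.argmax(corr), corr.shape)
    # Wrap displacements larger than half the window back to negative shifts.
    dy = peak[0] if peak[0] <= a.shape[0] // 2 else peak[0] - a.shape[0]
    dx = peak[1] if peak[1] <= a.shape[1] // 2 else peak[1] - a.shape[1]
    return dy, dx

# Synthetic check: shift a random particle pattern by (3, -2) pixels.
rng = np.random.default_rng(1)
frame_a = rng.random((32, 32))
frame_b = np.roll(frame_a, shift=(3, -2), axis=(0, 1))
print(window_displacement(frame_a, frame_b))   # ≈ (3, -2); velocity = displacement / dt
```

Repeating this over a grid of interrogation windows, and converting pixel displacements to physical velocities via the pixel pitch and the laser pulse separation dt, yields the whole-field velocity map described above.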
Abstract:
The purpose of this retrospective study was to intra-individually compare the image quality of computed radiography (CR) and low-dose linear-slit digital radiography (LSDR) for supine chest radiographs. A total of 90 patients (28 female, 62 male; mean age, 55.1 years) imaged with CR and LSDR within a mean time interval of 2.8 ± 3.0 days were included in this study. Two independent readers evaluated the image quality of CR and LSDR based on modified European Guidelines for Quality Criteria for chest X-rays. The Wilcoxon test was used to analyse differences between the techniques. The overall image quality of LSDR was significantly better than that of CR (9.75 vs 8.16 out of a maximum score of 10; p < 0.001). LSDR performed significantly better than CR for delineation of anatomical structures in the mediastinum and the retrocardiac lung (p < 0.001). CR was superior to LSDR for visually sharp delineation of the lung vessels and the thin linear structures in the lungs. We conclude that LSDR yields better image quality and may be more suitable than CR for excluding significant pathological features of the chest in areas with high attenuation.
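As a pointer for readers who want to reproduce this kind of paired comparison, a minimal sketch of a Wilcoxon signed-rank test follows; the per-patient scores below are synthetic placeholders (the study's data are not reproduced), and scipy.stats.wilcoxon is an assumed choice of implementation for the Wilcoxon test mentioned above.

```python
# Minimal sketch of a paired Wilcoxon signed-rank test on per-patient image-quality
# scores for two techniques. The scores here are synthetic, not the study's data.
import numpy as np
from scipy.stats import wilcoxon

rng = np.random.default_rng(42)
n_patients = 90
scores_lsdr = np.clip(rng.normal(9.75, 0.4, n_patients), 0, 10)   # hypothetical scores
scores_cr = np.clip(rng.normal(8.16, 0.8, n_patients), 0, 10)

stat, p_value = wilcoxon(scores_lsdr, scores_cr)
print(f"Wilcoxon statistic = {stat:.1f}, p = {p_value:.4f}")
```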