877 results for IMAGE PATTERN CLASSIFICATION
Abstract:
X-ray fluoroscopy is essential in both diagnosis and medical intervention, although it may deliver significant radiation doses to patients, which must be justified and optimised. It is therefore crucial that the patient be exposed to the lowest achievable dose without compromising image quality. The purpose of this study was to analyse quality control measurements, particularly dose rates, contrast and spatial resolution, of Portuguese fluoroscopy equipment, and to contribute to the establishment of reference levels for equipment performance parameters. Measurements carried out between 2007 and 2013 on 143 fluoroscopy units distributed across 34 health units nationwide were analysed. The measurements suggest that the image quality and dose rates of Portuguese equipment are congruent with other studies and, in general, comply with Portuguese law. However, there is still room for improvement aimed at optimisation at the national level.
Abstract:
Medical imaging is a powerful diagnostic tool. Consequently, the number of medical images taken has increased vastly over the past few decades. The most common medical imaging techniques use X-radiation as the primary investigative tool. The main limitation of using X-radiation is the associated risk of developing cancers. Alongside this, technology has advanced and more centres now use CT scanners; these can incur significant radiation burdens compared with traditional X-ray imaging systems. The net effect is that the population radiation burden is rising steadily. Risk arising from X-radiation for diagnostic medical purposes needs to be minimised, and one way to achieve this is to reduce radiation dose whilst optimising image quality. All ages are affected by risk from X-radiation; however, the ageing population highlights the elderly as a new group that may require consideration. Of greatest concern are paediatric patients: firstly, they are more sensitive to radiation; secondly, their younger age means that the potential detriment to this group is greater. Containment of radiation exposure falls to a number of professionals within medical fields, from those who request imaging to those who produce the image. These staff are supported in their radiation protection role by engineers, physicists and technicians. It is important to realise that radiation protection is currently a major European focus of interest, and minimum competence levels in radiation protection for radiographers have been defined through the integrated activities of the EU consortium MEDRAPET. The outcomes of this project have been used by the European Federation of Radiographer Societies to describe the European Qualifications Framework levels for radiographers in radiation protection. Though variations exist between European countries, radiographers and nuclear medicine technologists are normally the professional groups responsible for exposing screening populations and patients to X-radiation. As part of their training, they learn the fundamental principles of radiation protection and theoretical and practical approaches to dose minimisation. However, dose minimisation is complex: it is not simply about reducing X-radiation dose; major contextual factors must be taken into account. These factors relate to the real world of clinical imaging and include the need to measure clinical image quality and lesion visibility when applying X-radiation dose reduction strategies. This requires the use of validated psychological and physics techniques to measure clinical image quality and lesion perceptibility.
Abstract:
The definition and programming of distributed applications has become a major research issue due to the increasing availability of (large-scale) distributed platforms and the requirements posed by economic globalization. However, such a task requires a huge effort due to the complexity of distributed environments: large numbers of users may communicate and share information across different authority domains; moreover, the "execution environment" or "computations" are dynamic, since the number of users and the computational infrastructure change over time. Grid environments, in particular, promise to be an answer to such complexity, by providing high-performance execution support to large numbers of users and resource sharing across different organizations. Nevertheless, programming in Grid environments is still a difficult task. There is a lack of high-level programming paradigms and support tools that may guide the application developer and allow reusability of state-of-the-art solutions. Specifically, the main goal of the work presented in this thesis is to contribute to the simplification of the development cycle of applications for Grid environments by bringing structure and flexibility to three stages of that cycle through a common model. The stages are: the design phase, the execution phase, and the reconfiguration phase. The common model is based on the manipulation of patterns through pattern operators, and the division of both patterns and operators into two categories, namely structural and behavioural. Moreover, both structural and behavioural patterns are first-class entities at each of the aforesaid stages. At the design phase, patterns can be manipulated like other first-class entities such as components. This allows a more structured way to build applications by reusing and composing state-of-the-art patterns. At the execution phase, patterns are units of execution control: it is possible, for example, to start, stop, or resume the execution of a pattern as a single entity. At the reconfiguration phase, patterns can also be manipulated as single entities, with the additional advantage that it is possible to perform a structural reconfiguration while keeping some of the behavioural constraints, and vice versa. For example, it is possible to replace a behavioural pattern, which was applied to some structural pattern, with another behavioural pattern. In this thesis, besides the proposal of the methodology for distributed application development, as sketched above, a relevant set of pattern operators was defined. The methodology and the expressivity of the pattern operators were assessed through the development of several representative distributed applications. To support this validation, a prototype was designed and implemented, encompassing some relevant patterns and a significant part of the pattern operators defined. This prototype was based on the Triana environment; Triana supports the development and deployment of distributed applications in the Grid through a dataflow-based programming model. Additionally, this thesis presents the analysis of a mapping of some operators for execution control onto the Distributed Resource Management Application API (DRMAA). This assessment confirmed the suitability of the proposed model, as well as the generality and flexibility of the defined pattern operators.
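As a purely illustrative sketch (not code from the thesis), the following shows the flavour of the idea: a structural pattern treated as a first-class entity that a structural operator can compose, and whose execution is controlled as a single unit. All names (StructuralPattern, compose) are invented for the example.

```python
# Hypothetical sketch of patterns as first-class entities with execution
# control; invented names, not the thesis's actual API.

class StructuralPattern:
    def __init__(self, components):
        self.components = list(components)
        self.running = False

    def start(self):
        # Execution control treats the whole pattern as a single unit.
        self.running = True
        for c in self.components:
            print(f"starting {c}")

    def stop(self):
        self.running = False


def compose(p1, p2):
    # Structural operator: merge two patterns into a larger one.
    return StructuralPattern(p1.components + p2.components)


stage_a = StructuralPattern(["filter", "transform"])
stage_b = StructuralPattern(["aggregate"])
pipeline = compose(stage_a, stage_b)
pipeline.start()   # start/stop act on the composed pattern as one entity
```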
Abstract:
The goal of this study is to analyze the dynamical properties of financial data series from nineteen worldwide stock market indices (SMI) during the period 1995–2009. SMI reveal complex behavior that can be explored since a considerable volume of data is available. In this paper, the windowed Fourier transform and methods of fractional calculus are applied. The results reveal classification patterns typical of fractional-order systems.
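As a rough illustration only (not the paper's code), the sketch below applies a windowed Fourier transform to a synthetic index series; the spectral-slope fit at the end is just one common way fractional-order behaviour is characterised, shown under that assumption.

```python
# Illustrative sketch: windowed (short-time) Fourier transform of a
# synthetic "index" series, as one might do for SMI data. Synthetic data.
import numpy as np
from scipy.signal import stft

rng = np.random.default_rng(0)
log_prices = np.cumsum(rng.normal(0.0, 0.01, 4096))  # stand-in for a log index
returns = np.diff(log_prices)

# Windowed Fourier transform: frequency content as it evolves over time.
f, t, Z = stft(returns, fs=1.0, window="hann", nperseg=256, noverlap=128)
power = np.abs(Z) ** 2

# A power-law fit of the mean spectrum is one rough way fractional-order
# dynamics are often characterised (assumption for this illustration).
mean_spectrum = power.mean(axis=1)[1:]                # drop the DC bin
slope, _ = np.polyfit(np.log(f[1:]), np.log(mean_spectrum + 1e-12), 1)
print(f"approximate spectral slope: {slope:.2f}")
```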
Abstract:
Wireless Sensor Networks (WSNs) are increasingly used in various application domains such as home automation, agriculture, industry and infrastructure monitoring. As applications tend to leverage larger geographical deployments of sensor networks, the availability of an intuitive and user-friendly programming abstraction becomes a crucial factor in enabling faster and more efficient development and reprogramming of applications. We propose a programming pattern named sMapReduce, inspired by the Google MapReduce framework, for mapping application behaviors onto a sensor network and enabling complex data aggregation. The proposed pattern requires a user to create a network-level application in two functions, sMap and Reduce, in order to abstract away from the low-level details without sacrificing the control needed to develop complex logic. Such a two-fold division of programming logic is a natural fit for typical sensor network operation, and makes sensing and topological modalities accessible to the user.
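A minimal sketch of the two-function division the pattern implies, simulated over plain Python dictionaries rather than a real sensor network; the node model and function names below are invented for illustration, not the paper's API.

```python
# Sketch of an sMap/Reduce-style split: sMap runs per node, Reduce
# aggregates per key. Simulated node data, not a real WSN runtime.

def smap(node):
    # sMap: maps a node's local behaviour to a (key, value) pair,
    # here tagging each temperature reading with its region.
    return (node["region"], node["temperature"])

def reduce_fn(key, values):
    # Reduce: aggregates the values collected for one key (per-region mean).
    return (key, sum(values) / len(values))

nodes = [
    {"region": "field-A", "temperature": 21.5},
    {"region": "field-A", "temperature": 22.1},
    {"region": "field-B", "temperature": 19.8},
]

grouped = {}
for key, value in map(smap, nodes):
    grouped.setdefault(key, []).append(value)

print([reduce_fn(k, v) for k, v in grouped.items()])
# [('field-A', 21.8), ('field-B', 19.8)]
```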
Abstract:
The discovery of X-rays was undoubtedly one of the greatest stimuli for improving the efficiency of healthcare services. The ability to view, non-invasively, inside the human body has greatly facilitated the work of professionals in the diagnosis of disease. An exclusive focus on image quality (IQ), without understanding how images are obtained, negatively affects efficiency in diagnostic radiology. The equilibrium between benefits and risks is often forgotten. It is necessary to adopt optimization strategies to maximize the benefits (image quality) and minimize the risks (dose to the patient) in radiological facilities. In radiology, the implementation of optimization strategies requires an understanding of the image acquisition process. When a radiographer adopts a certain value of a parameter (tube potential [kVp], tube current-exposure time product [mAs] or additional filtration), it is essential to know its meaning and the impact of its variation on dose and image quality. Without this, any optimization strategy will fail. Worldwide, data show that the use of X-rays has become increasingly frequent. In Cabo Verde, we note an effort by healthcare institutions (e.g. the Ministry of Health) in equipping radiological facilities, and the recent installation of a telemedicine system requires the purchase of new radiological equipment. In addition, the transition from screen-film to digital systems is characterized by a rise in patient exposure. Given that this transition is slower in less developed countries, as is the case in Cabo Verde, the need to adopt optimization strategies becomes increasingly pressing. This study was conducted as an attempt to answer that need. Although this work concerns the objective evaluation of image quality, and in medical practice the evaluation is usually subjective (visual evaluation of images by the radiographer/radiologist), studies have reported a correlation between these two types of evaluation (objective and subjective) [5-7], which supports conducting such studies. The purpose of this study is to evaluate the effect of the exposure parameters (kVp and mAs), when using additional copper (Cu) filtration, on dose and image quality in a Computed Radiography system.
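As an illustration of the kind of objective image-quality evaluation the study relies on, the sketch below computes a contrast-to-noise ratio (CNR), a common objective metric for comparing exposure settings; the image, ROI positions and values are all synthetic assumptions, not the study's data.

```python
# Illustrative sketch: contrast-to-noise ratio (CNR) as an objective
# image-quality metric for comparing exposure settings (kVp, mAs, Cu
# filtration). The image and regions of interest are synthetic.
import numpy as np

rng = np.random.default_rng(1)
image = rng.normal(100.0, 5.0, (256, 256))   # uniform background + noise
image[100:140, 100:140] += 20.0              # simulated detail ("lesion")

detail = image[100:140, 100:140]
background = image[0:40, 0:40]

cnr = (detail.mean() - background.mean()) / background.std()
print(f"CNR = {cnr:.1f}")   # higher CNR at equal dose suggests better settings
```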
Abstract:
The foot and the ankle are small structures commonly affected by disorders, and their complex anatomy represents a significant diagnostic challenge. SPECT/CT image fusion can provide missing anatomical and bone structure information to functional imaging, which is particularly useful for increasing diagnostic certainty of bone pathology. However, due to the duration of SPECT acquisition, a patient's involuntary movements may lead to misalignment between the SPECT and CT images. Patient motion can be reduced using a dedicated patient support. We aimed to design an ankle and foot immobilizing device and measure its efficacy at improving image fusion. Methods: We enrolled 20 patients undergoing distal lower-limb SPECT/CT of the ankle and the foot with and without a foot holder. The misalignment between SPECT and CT images was computed by manually measuring 14 fiducial markers chosen among anatomical landmarks also visible on bone scintigraphy. Analysis of variance was performed for statistical analysis. Results: The absolute average difference without and with the support was 5.1±5.2 mm (mean±SD) and 3.1±2.7 mm, respectively, a significant difference (p<0.001). Conclusion: The introduction of the foot holder significantly decreases misalignment between SPECT and CT images, which may have clinical impact on the precise localization of foot and ankle pathology.
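A sketch of how per-marker misalignment might be quantified from paired fiducial coordinates. The data are synthetic, and a paired t-test is used here only to keep the illustration short; the paper itself used analysis of variance.

```python
# Sketch only: SPECT-vs-CT misalignment from paired fiducial markers.
# Synthetic coordinates; paired t-test stands in for the paper's ANOVA.
import numpy as np
from scipy.stats import ttest_rel

rng = np.random.default_rng(2)
n_markers = 14

def misalignment(spread_mm):
    # (x, y, z) of each marker on CT, with the SPECT position perturbed.
    ct = rng.uniform(0, 100, (n_markers, 3))
    spect = ct + rng.normal(0, spread_mm, (n_markers, 3))
    return np.linalg.norm(spect - ct, axis=1)    # per-marker distance in mm

without_holder = misalignment(spread_mm=3.0)
with_holder = misalignment(spread_mm=1.8)

print(f"without: {without_holder.mean():.1f} mm, "
      f"with: {with_holder.mean():.1f} mm")
print(ttest_rel(without_holder, with_holder))
```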
Abstract:
OBJECTIVE To analyze the Brazilian literature on body image and the theoretical and methodological advances that have been made. METHODS A detailed review was undertaken of the Brazilian literature on body image, selecting published articles, dissertations and theses from the SciELO, SCOPUS, LILACS and PubMed databases and the CAPES thesis database. Google Scholar was also used. There was no start date for the search, which used the following search terms: "body image" AND "Brazil" AND "scale(s)"; "body image" AND "Brazil" AND "questionnaire(s)"; "body image" AND "Brazil" AND "instrument(s)"; "body image" limited to Brazil; and "body image". RESULTS The majority of available measures were intended for use with college students, with half of them evaluating satisfaction/dissatisfaction with the body. Females and adolescents of both sexes were the most studied populations. There has been a significant increase in the number of available instruments. Nevertheless, numerous published studies have used non-validated instruments, with much confusion in the use of the appropriate terms (e.g., perception, dissatisfaction, distortion). CONCLUSIONS Much more research is needed to understand body image within the Brazilian population, especially in terms of evaluating different age groups and diversifying the components/dimensions assessed. However, interest in this theme is increasing, and important steps have been taken in a short space of time.
Abstract:
OBJECTIVE To understand the perception of body image in adolescence. METHODS A qualitative study was conducted with eight focus groups comprising 96 students of both sexes, attending four public elementary schools in the city of Rio de Janeiro, Southeastern Brazil, in 2013. An interview guide was used, with questions about the adolescents' feelings in relation to their bodies, standards of idealized beauty, the practice of physical exercise and sociocultural influences on self-image. In the data analysis, we sought to understand and interpret the meanings and contradictions of the narratives, taking into account the subjects' context and reasons and the internal logic of the group. RESULTS Three thematic categories were identified. The influence of the media on body image revealed the difficulty of achieving the perfect body, and broadcast standards of beauty are viewed with suspicion. The importance of a healthy body emerged because standards of beauty and good looks were closely linked to good physical condition and seen as the result of having a healthy body. The relationship between the standard of beauty and prejudice emerged because people who are not considered attractive, or who have small physical imperfections, are discriminated against and can be rejected or even excluded from society. CONCLUSIONS The standard of the perfect body propagated by the media influences adolescents' self-image and, consequently, their self-esteem, and is considered an unattainable goal, corresponding to a standard of beauty described as artificial and unreal. Nevertheless, it causes great suffering and discrimination against those who do not feel they are attractive, which can lead to health problems resulting from low self-esteem.
Abstract:
Purpose: To compare image quality and effective dose when the 10 kVp rule is applied in manual and AEC modes in PA chest X-ray examinations. Methods and Materials: A total of 68 images (with and without lesions) were acquired of an anthropomorphic chest phantom on a Wolverson Arcoma X-ray unit. The images were evaluated against a reference image by five radiographers, using image quality criteria and the two-alternative forced choice (2AFC) method. The effective dose was calculated with PCXMC software from the exposure parameters and DAP. The exposure index (lgM) was recorded. Results: Exposure time decreases considerably when applying the 10 kVp rule in manual mode (50%-28%) compared to AEC mode (36%-23%). Statistically significant differences in effective dose between the several AEC modes were found (p=0.002). The effective dose is lowest when using only the right AEC ionisation chamber. Considering image quality, there are no statistically significant differences (p=0.348) between the different AEC modes for images with no lesions. With higher kVp values, the lgM values also increase; the lgM values showed statistically significant differences (p=0.000). The image quality scores did not present statistically significant differences (p=0.043) for the images with lesions when comparing manual with AEC modes. Conclusion: In general, the dose is lower in manual mode. Using only the right AEC ionisation chamber yields the lowest effective dose in comparison with the other chambers. The use of the 10 kVp rule did not affect the detectability of the lesions.
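The 10 kVp rule itself is simple arithmetic: each 10 kVp increase in tube potential is compensated by halving the mAs, keeping receptor exposure roughly constant while tending to lower patient dose. A toy sketch with arbitrary starting values (not the study's exposure settings):

```python
# Sketch of the 10 kVp rule referenced in the abstract: +10 kVp, mAs halved.
# The baseline technique below is a hypothetical example.

def apply_10kvp_rule(kvp, mas, steps=1):
    """Each step: raise tube potential by 10 kVp and halve the mAs."""
    for _ in range(steps):
        kvp += 10
        mas /= 2
    return kvp, mas

print(apply_10kvp_rule(90, 8.0))      # (100, 4.0)
print(apply_10kvp_rule(90, 8.0, 2))   # (110, 2.0)
```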
Abstract:
Purpose: To assess image quality in digital mammography examinations acquired on DR systems using the PGMI (perfect, good, moderate, inadequate) scale; to identify the main failures and propose corrective actions; and to evaluate the most typical breast density. Methods and Materials: Clinical image quality criteria were evaluated for mammograms acquired on 13 DR systems and classified according to the PGMI scale, using the criteria described in the European Commission guidelines for radiographers. Breast density was assessed according to ACR recommendations. The data were collected on the acquisition system monitor to reproduce the daily practice of the radiographer. Results: The image quality criteria were evaluated in 3044 images. The criteria were fully achieved in 41% of the images, which were classified as P (perfect); 31% of the images were classified as M (moderate), 20% as G (good) and 9% as I (inadequate). The main causes of inadequate image quality were failure to include all breast tissue in the image and skin folds over the pectoral muscle and in the inframammary angle. The highest number of failures occurred in MLO projections (809 out of 1022). The most represented breast type (36%) was type 2 (25-50% glandular tissue). Conclusion: Incorrect radiographic technique was frequently detected, suggesting potential training needs and poor communication between team members (radiographers and radiologists). Further correlation analyses are necessary to identify the main causes of the failures, namely specific education and training in digital mammography and workload.
Abstract:
Optimization problems arise in science, engineering, economics, and many other areas, and we need to find the best solution for each problem. The methods used to solve these problems depend on several factors, including the amount and type of accessible information, the algorithms available for solving them, and, obviously, the intrinsic characteristics of the problem. There are many kinds of optimization problems and, consequently, many kinds of methods to solve them. When the functions involved are nonlinear and their derivatives are not known or are very difficult to calculate, suitable methods are rarer. Such functions are frequently called black-box functions. To solve such problems without constraints (unconstrained optimization), we can use direct search methods, which do not require any derivatives or approximations of them. But when the problem has constraints (nonlinear programming problems) and, additionally, the constraint functions are black-box functions, it is much more difficult to find the most appropriate method. Penalty methods can then be used. They transform the original problem into a sequence of other problems, derived from the initial one, all without constraints. This sequence of (unconstrained) problems can then be solved using the methods available for unconstrained optimization. In this chapter, we present a classification of some of the existing penalty methods and describe some of their assumptions and limitations. These methods allow the solution of optimization problems with continuous, discrete, and mixed constraints, without requiring continuity, differentiability, or convexity. Thus, penalty methods can be used as the first step in the resolution of constrained problems, by means of methods typically used for unconstrained problems. We also discuss a new class of penalty methods for nonlinear optimization, which adjust the penalty parameter dynamically.
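A minimal quadratic-penalty sketch in the generic textbook form (not a specific method from the chapter): the constrained problem is replaced by a sequence of unconstrained ones with a growing penalty parameter, each solved with a derivative-free method, consistent with the black-box setting. The objective and constraint below are invented examples.

```python
# Generic quadratic-penalty sketch: minimize f(x) s.t. g(x) <= 0 via a
# sequence of unconstrained problems with an increasing penalty parameter.
import numpy as np
from scipy.optimize import minimize

def f(x):                      # black-box objective (example)
    return (x[0] - 2) ** 2 + (x[1] - 1) ** 2

def g(x):                      # black-box constraint: g(x) <= 0 (example)
    return x[0] + x[1] - 2

def penalized(x, mu):
    # Quadratic penalty: only violated constraints are penalised.
    return f(x) + mu * max(0.0, g(x)) ** 2

x, mu = np.array([0.0, 0.0]), 1.0
for _ in range(8):
    # Derivative-free solver, consistent with black-box functions.
    res = minimize(lambda z: penalized(z, mu), x, method="Nelder-Mead")
    x, mu = res.x, mu * 10     # increase the penalty parameter

print(x, g(x))   # approaches the constrained minimum near (1.5, 0.5)
```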
Abstract:
OBJECTIVE To investigate the factors related to the granting of preliminary court orders [injunctions] in drug litigation. METHODS A retrospective descriptive study of drug lawsuits in the State of Minas Gerais, Southeastern Brazil, from October 1999 to 2009, was conducted. The database consists of 6,112 lawsuits, of which 6,044 had motions for injunctions and 5,167 included the requisition of drugs. Those with more than one beneficiary were excluded, leaving 5,072 examined suits. The variables for complete, partial, and suppressed motions were treated as dependent and assessed in relation to the independent variables: lawsuits (year, type, legal representation, defendant, court in which the suit was filed, adjudication time), drugs (level five of the Anatomical Therapeutic Chemical classification), and diseases (chapter of the International Classification of Diseases). Statistical analyses were performed using the chi-square test. RESULTS Of the 5,072 lawsuits with injunction motions, 4,184 (82.5%) had the injunctions granted. Granting varied from 95.8% of the total lawsuits in 2004 to 76.9% in 2008. Where there was legal representation, granting exceeded 80.0%; in lawsuits without representation, it did not exceed 66.9%. In public civil actions (89.1%), granting was higher than in ordinary lawsuits (82.8%) and injunctions (80.1%). Federal courts granted only 68.6% of the injunctions, while the state courts granted 84.8%. Diseases of the digestive system and neoplasms received grant rates of up to 87.0%, while diseases of the nervous system, mental and behavioral disorders, and diseases of the skin and subcutaneous tissue received grant rates below 78.6% and showed a high proportion of suspended injunctions (10.9%). Injunctions involving the drugs paroxetine, somatropin, and ferrous sulfate were all granted, while fewer than 54.0% of those involving escitalopram, sodium diclofenac, and nortriptyline were granted. CONCLUSIONS There are significant differences in the granting of injunctions, depending on procedural and clinical variables. Important trends in the pattern of judicial action were observed, particularly the reduced granting [of injunctions] over the period.
Abstract:
In video communication systems, video signals are typically compressed and sent to the decoder through an error-prone transmission channel that may corrupt the compressed signal, causing degradation of the final decoded video quality. In this context, it is possible to enhance the error resilience of typical predictive video coding schemes by drawing inspiration from the principles and tools of an alternative video coding approach, the so-called Distributed Video Coding (DVC), based on Distributed Source Coding (DSC) theory. Further improvements in the decoded video quality after error-prone transmission may also be obtained by considering the perceptual relevance of the video content, as distortions occurring in different regions of a picture have a different impact on the user's final experience. In this context, this paper proposes a Perceptually Driven Error Protection (PDEP) video coding solution that enhances the error resilience of a state-of-the-art H.264/AVC predictive video codec using DSC principles and perceptual considerations. To increase the H.264/AVC error resilience performance, the main technical novelties brought by the proposed video coding solution are: (i) an improved compressed-domain perceptual classification mechanism; (ii) an improved transcoding tool for the DSC-based protection mechanism; and (iii) the integration of a perceptual classification mechanism into an H.264/AVC-compliant codec with a DSC-based error protection mechanism. The performance results show that the proposed PDEP video codec provides a better-performing alternative to traditional error protection video coding schemes, notably Forward Error Correction (FEC)-based schemes.
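The general idea of perceptually driven unequal protection can be sketched as follows. This is an invented illustration, not the paper's H.264/AVC + DSC mechanism: regions are scored by perceptual relevance and a fixed protection budget is allocated in proportion, so transmission errors are steered toward the least visible areas.

```python
# Invented sketch of perceptually driven unequal error protection:
# allocate protection (parity) bits in proportion to perceptual relevance.

regions = [
    {"id": 0, "relevance": 0.9},   # e.g. a face: errors highly visible
    {"id": 1, "relevance": 0.4},
    {"id": 2, "relevance": 0.1},   # background: errors less visible
]

PARITY_BUDGET = 100  # total protection bits available (arbitrary units)

total = sum(r["relevance"] for r in regions)
for r in regions:
    # Unequal protection: stronger shielding for more relevant regions.
    r["parity_bits"] = round(PARITY_BUDGET * r["relevance"] / total)

print(regions)
# region 0 gets ~64 bits, region 2 only ~7: errors hit the background first
```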
Abstract:
Over the last twenty years, genetic algorithms (GAs) have been applied in a plethora of fields such as control, system identification, robotics, planning and scheduling, image processing, and pattern and speech recognition (Bäck et al., 1997). In robotics, the problems of trajectory planning, collision avoidance and manipulator structure design considering a single criterion have been solved using several techniques (Alander, 2003). Most engineering applications require the optimization of several criteria simultaneously. Often the problems are complex, include discrete and continuous variables, and there is no prior knowledge about the search space. These kinds of problems are far more complex, since they consider multiple design criteria simultaneously within the optimization procedure. This is known as multi-criteria (or multi-objective) optimization, which has been addressed successfully through GAs (Deb, 2001). The overall aim of multi-criteria evolutionary algorithms is to achieve a set of non-dominated optimal solutions known as the Pareto front. At the end of the optimization procedure, instead of a single optimal (or near-optimal) solution, the decision maker can select a solution from the Pareto front. Some of the key issues in multi-criteria GAs are: i) handling the number of objectives, ii) obtaining a Pareto front as wide as possible and iii) achieving a uniformly spread Pareto front. Indeed, multi-objective techniques using GAs have been increasing in relevance as a research area. In 1989, Goldberg suggested the use of a GA to solve multi-objective problems, and since then other researchers have been developing new methods, such as the multi-objective genetic algorithm (MOGA) (Fonseca & Fleming, 1995), the non-dominated sorting genetic algorithm (NSGA) (Deb, 2001), and the niched Pareto genetic algorithm (NPGA) (Horn et al., 1994), among several other variants (Coello, 1998). In this work, the trajectory planning problem considers: i) robots with 2 and 3 degrees of freedom (dof), ii) the inclusion of obstacles in the workspace and iii) up to five criteria used to qualify the evolving trajectory, namely: joint traveling distance, joint velocity, end-effector Cartesian distance, end-effector Cartesian velocity and the energy involved. These criteria are used to minimize the joint and end-effector traveled distances, the trajectory ripple and the energy required by the manipulator to reach the destination point. Bearing these ideas in mind, the chapter addresses the planning of robot trajectories, meaning the development of an algorithm to find a continuous motion that takes the manipulator from a given starting configuration to a desired end position without colliding with any obstacle in the workspace. The chapter is organized as follows. Section 2 describes trajectory planning and several approaches proposed in the literature. Section 3 formulates the problem, namely the representation adopted to solve the trajectory planning and the objectives considered in the optimization. Section 4 studies the algorithm's convergence. Section 5 studies a 2R manipulator (i.e., a robot with two rotational joints/links) when the trajectory optimization considers two and five objectives. Sections 6 and 7 present the results for the 3R redundant manipulator with five objectives and for other complementary experiments, respectively. Finally, Section 8 draws the main conclusions.
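The Pareto-dominance test at the core of such multi-objective GAs can be sketched as follows. This is a generic illustration, not the chapter's algorithm, and assumes all objectives are to be minimised (e.g. joint traveled distance and energy).

```python
# Generic sketch of Pareto dominance and non-dominated filtering, the core
# notion behind multi-objective GAs. All objectives are minimised here.

def dominates(a, b):
    """a dominates b: no worse in every objective, better in at least one."""
    return (all(x <= y for x, y in zip(a, b))
            and any(x < y for x, y in zip(a, b)))

def pareto_front(population):
    # Keep only solutions that no other solution dominates.
    return [p for p in population
            if not any(dominates(q, p) for q in population if q is not p)]

# (distance, energy) for a few candidate trajectories (invented values)
candidates = [(3.0, 10.0), (2.5, 12.0), (4.0, 9.0), (3.5, 11.0), (2.5, 12.5)]
print(pareto_front(candidates))
# [(3.0, 10.0), (2.5, 12.0), (4.0, 9.0)] -- the non-dominated set
```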