Abstract:
Pharmaceutical compounding, one of the most representative professional activities of the pharmacist, involves the preparation, according to a medical prescription, of a personalised medicine tailored to a specific patient, as a professional commitment to solve a specific health problem. The wide range of industrially manufactured medicines has considerably reduced this activity, which should nevertheless be regarded as a tool for the future, in line with the current trend towards personalised medicine and the needs of the patient. The knowledge and competences required for this professional activity are currently introduced in the Pharmacy degree through an optional subject. This paper presents the methodological approach designed for this subject by the Teaching Innovation Group of Pharmaceutical Technology (GIDTF) and the e-Galenica group, both from the University of Barcelona. The methodology is based on Problem-Based Learning (PBL), including tutorials and field practice, and is supported by out-of-class strategies such as discussion forums, online resources, self-assessment questionnaires and tasks delivered through the Moodle platform of the UB Virtual Campus. Academic results and student responses to surveys on how the subject is delivered are also assessed.
Abstract:
This project falls within the field of computer vision, specifically the use of depth data obtained through an infrared light emitter and sensor. Its main purpose is to show how these technologies, within the reach of any individual, can be adapted so that a user practising a particular sport receives continuous visual feedback about the incorrect movements and gestures being performed, based on a set of previously established parameters. The goal is to read, constantly and in real time, a person practising a selection of static sports activities using a Kinect sensor. From the data obtained by the Kinect sensor, and using the skeleton tracking libraries provided by Microsoft, the postural data for each type of sport must be interpreted, and the errors being made must be indicated visually and intuitively in real time, so that it is clear which part of the body is performing an incorrect movement and it can be corrected quickly. The development environment used to build the application is Microsoft Visual Studio 2010, and the language used with it is C#.
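The project's checking step amounts to reading skeletal joint positions each frame and flagging body parts whose posture falls outside previously established parameters. As a language-neutral illustration (the project itself targets C# and Microsoft's skeleton tracking libraries), the Python sketch below compares joint angles against allowed ranges; the joint names, coordinates and thresholds are invented for the example.

```python
import numpy as np

def joint_angle(a, b, c):
    """Angle (degrees) at joint b formed by segments b->a and b->c."""
    v1, v2 = np.asarray(a) - np.asarray(b), np.asarray(c) - np.asarray(b)
    cosang = np.dot(v1, v2) / (np.linalg.norm(v1) * np.linalg.norm(v2))
    return np.degrees(np.arccos(np.clip(cosang, -1.0, 1.0)))

# Hypothetical per-sport posture rules: joint name -> (min, max) allowed angle.
SQUAT_RULES = {"left_knee": (80.0, 100.0), "right_knee": (80.0, 100.0)}

def check_posture(joints, rules):
    """Return the joints whose current angle falls outside the allowed range.

    `joints` maps a joint name to three 3D points (parent, joint, child),
    e.g. hip, knee, ankle for a knee angle.
    """
    errors = {}
    for name, (lo, hi) in rules.items():
        parent, joint, child = joints[name]
        ang = joint_angle(parent, joint, child)
        if not (lo <= ang <= hi):
            errors[name] = ang   # flag this body part for visual feedback
    return errors

# Example frame (made-up coordinates standing in for Kinect skeleton data).
frame = {
    "left_knee":  ((0.0, 1.0, 0.0), (0.0, 0.5, 0.1), (0.0, 0.0, 0.0)),
    "right_knee": ((0.3, 1.0, 0.0), (0.3, 0.5, 0.0), (0.3, 0.0, 0.0)),
}
print(check_posture(frame, SQUAT_RULES))
```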
Abstract:
Phase-encoded nanostructures such as Quick Response (QR) codes made of metallic nanoparticles are suggested for use in security and authentication applications. We present a polarimetric optical method able to authenticate random phase-encoded QR codes. The system is illuminated using polarized light and the QR code is encoded using a phase-only random mask. Using classification algorithms, it is possible to validate the QR code from the examination of the polarimetric signature of the speckle pattern. We used the Kolmogorov-Smirnov statistical test and Support Vector Machine algorithms to authenticate the phase-encoded QR codes from their polarimetric signatures.
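A minimal sketch of the described decision logic, assuming simple distribution features of the speckle signature: the Kolmogorov-Smirnov test compares a candidate signature against an enrolled reference, and an SVM classifies feature vectors. The gamma-distributed samples merely stand in for measured polarimetric speckle intensities; the feature set is an assumption, not the one used in the paper.

```python
import numpy as np
from scipy.stats import ks_2samp
from sklearn.svm import SVC

rng = np.random.default_rng(0)

# Synthetic stand-ins for polarimetric speckle signatures: intensity samples
# drawn from different distributions for genuine and counterfeit codes.
genuine = [rng.gamma(shape=2.0, scale=1.0, size=500) for _ in range(20)]
counterfeit = [rng.gamma(shape=3.0, scale=0.8, size=500) for _ in range(20)]
reference = rng.gamma(shape=2.0, scale=1.0, size=500)   # enrolled true code

# 1) Kolmogorov-Smirnov test: compare a candidate against the reference.
def ks_decision(candidate, reference, alpha=0.05):
    result = ks_2samp(candidate, reference)
    return result.pvalue > alpha      # "same distribution" -> accept as authentic

# 2) SVM: classify simple distribution features (mean, std, third moment).
def features(sig):
    return [sig.mean(), sig.std(), ((sig - sig.mean()) ** 3).mean()]

X = np.array([features(s) for s in genuine + counterfeit])
y = np.array([1] * len(genuine) + [0] * len(counterfeit))
clf = SVC(kernel="rbf").fit(X, y)

test = rng.gamma(shape=2.0, scale=1.0, size=500)
print("KS accepts:", ks_decision(test, reference))
print("SVM predicts authentic:", bool(clf.predict([features(test)])[0]))
```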
Abstract:
Changes in the angle of illumination incident upon a 3D surface texture can significantly alter its appearance, implying variations in the image texture. These texture variations produce displacements of class members in the feature space, increasing the failure rates of texture classifiers. To avoid this problem, a model-based texture recognition system which classifies textures seen from different distances and under different illumination directions is presented in this paper. The system works on the basis of a surface model obtained by means of 4-source colour photometric stereo, used to generate 2D image textures under different illumination directions. The recognition system combines co-occurrence matrices for feature extraction with a Nearest Neighbour classifier. Moreover, the recognition system allows one to estimate the approximate direction of the illumination used to capture the test image.
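The recognition pipeline named here, co-occurrence matrix features followed by a Nearest Neighbour classifier, can be sketched with scikit-image and scikit-learn as below; the synthetic textures stand in for images rendered from the photometric-stereo surface model, and the chosen Haralick properties are an assumption.

```python
import numpy as np
from skimage.feature import graycomatrix, graycoprops
from sklearn.neighbors import KNeighborsClassifier

def glcm_features(image, distances=(1, 2), angles=(0, np.pi / 2)):
    """Haralick-style statistics from a grey-level co-occurrence matrix."""
    glcm = graycomatrix(image, distances=distances, angles=angles,
                        levels=256, symmetric=True, normed=True)
    props = ("contrast", "homogeneity", "energy", "correlation")
    return np.hstack([graycoprops(glcm, p).ravel() for p in props])

rng = np.random.default_rng(1)
# Synthetic textures stand in for images rendered under different illuminations.
smooth = [rng.integers(100, 140, (64, 64), dtype=np.uint8) for _ in range(10)]
noisy = [rng.integers(0, 256, (64, 64), dtype=np.uint8) for _ in range(10)]

X = np.array([glcm_features(im) for im in smooth + noisy])
y = np.array([0] * len(smooth) + [1] * len(noisy))

knn = KNeighborsClassifier(n_neighbors=1).fit(X, y)
test = rng.integers(100, 140, (64, 64), dtype=np.uint8)
print(knn.predict([glcm_features(test)]))   # expected: class 0 ("smooth")
```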
A new approach to segmentation based on fusing circumscribed contours, region growing and clustering
Abstract:
One of the major problems in machine vision is the segmentation of images of natural scenes. This paper presents a new proposal for the image segmentation problem based on the integration of edge and region information. The main contours of the scene are detected and used to guide the subsequent region growing process. The algorithm places a number of seeds at both sides of a contour, allowing a set of concurrent growing processes to be started. A prior analysis of the seeds makes it possible to adjust the homogeneity criterion to the characteristics of each region. A new homogeneity criterion based on clustering analysis and convex hull construction is proposed.
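As a rough illustration of the growing processes started from seeds placed at both sides of a contour, here is a minimal single-seed region-growing sketch; the homogeneity test (distance to the running region mean) is a deliberate simplification, not the clustering and convex-hull criterion proposed in the paper.

```python
import numpy as np
from collections import deque

def region_grow(image, seed, tol):
    """Grow a region from `seed`, adding 4-connected pixels whose intensity
    stays within `tol` of the running region mean (a simple homogeneity test)."""
    h, w = image.shape
    grown = np.zeros((h, w), dtype=bool)
    grown[seed] = True
    total, count = float(image[seed]), 1
    queue = deque([seed])
    while queue:
        r, c = queue.popleft()
        for nr, nc in ((r - 1, c), (r + 1, c), (r, c - 1), (r, c + 1)):
            if 0 <= nr < h and 0 <= nc < w and not grown[nr, nc]:
                if abs(float(image[nr, nc]) - total / count) <= tol:
                    grown[nr, nc] = True
                    total += float(image[nr, nc])
                    count += 1
                    queue.append((nr, nc))
    return grown

# Toy image: a bright square on a dark background; in the full algorithm the
# seeds would be placed on both sides of a detected contour.
img = np.zeros((32, 32), dtype=np.uint8)
img[8:24, 8:24] = 200
inside = region_grow(img, seed=(16, 16), tol=30)
outside = region_grow(img, seed=(2, 2), tol=30)
print(inside.sum(), outside.sum())   # pixels claimed by each growing process
```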
Abstract:
In this paper a colour texture segmentation method which unifies region and boundary information is proposed. The algorithm uses a coarse detection of the perceptual (colour and texture) edges of the image to adequately place and initialise a set of active regions. The colour texture of each region is modelled by combining non-parametric kernel density estimation (which captures the colour behaviour) with classical co-occurrence matrix based texture features. Region information is thus defined, and accurate boundary information can be extracted to guide the segmentation process. Regions concurrently compete for the image pixels in order to segment the whole image, taking both information sources into account. Experimental results are shown which demonstrate the performance of the proposed method.
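The non-parametric colour model can be pictured as a kernel density estimate over pixel samples from a region, as in the SciPy sketch below; the colours are synthetic, and the combination with co-occurrence texture features and the region competition itself are not reproduced.

```python
import numpy as np
from scipy.stats import gaussian_kde

rng = np.random.default_rng(2)

# Synthetic RGB samples standing in for pixels drawn from a seed region.
region_pixels = rng.normal(loc=(180, 60, 60), scale=10, size=(500, 3))

# Non-parametric colour model of the region: a kernel density estimate over
# the (R, G, B) samples. gaussian_kde expects shape (dims, samples).
colour_model = gaussian_kde(region_pixels.T)

# Likelihood of candidate pixels under the region's colour model; in the full
# method this term would be combined with texture features when regions
# compete for pixels.
candidates = np.array([[182, 58, 63],    # similar colour -> high density
                       [40, 200, 40]])   # different colour -> low density
print(colour_model(candidates.T))
```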
Abstract:
An unsupervised approach to image segmentation which fuses region and boundary information is presented. The proposed approach takes advantage of the combined use of three different strategies: the guidance of seed placement, the control of the decision criterion, and boundary refinement. The new algorithm uses the boundary information to initialise a set of active regions which compete for the pixels in order to segment the whole image. The method is implemented on a multiresolution representation which ensures noise robustness as well as computational efficiency. The accuracy of the segmentation results has been proven through an objective comparative evaluation of the method.
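The multiresolution representation mentioned here can be pictured as a Gaussian image pyramid; a minimal scikit-image sketch follows (the pyramid only, not the region competition run on it).

```python
import numpy as np
from skimage.transform import pyramid_gaussian

# Toy image; in the method, segmentation starts on a coarse pyramid level
# (noise robustness, lower cost) and is refined on finer levels.
image = np.random.default_rng(3).random((256, 256))
pyramid = list(pyramid_gaussian(image, max_layer=3, downscale=2))
for level, layer in enumerate(pyramid):
    print(level, layer.shape)
```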
Abstract:
This paper presents a novel technique to align partial 3D reconstructions of the seabed acquired by a stereo camera mounted on an autonomous underwater vehicle. Vehicle localization and seabed mapping are performed simultaneously by means of an Extended Kalman Filter. Passive landmarks are detected in the images and characterized using 2D and 3D features. Landmarks are re-observed while the robot is navigating, which makes data association easier and more robust. Once the survey is completed, the vehicle trajectory is smoothed by a Rauch-Tung-Striebel filter, yielding an even better alignment of the 3D views and a large-scale acquisition of the seabed.
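The estimate-then-smooth pattern described, filtering during the survey and Rauch-Tung-Striebel smoothing once it is complete, is sketched below with numpy on a toy linear, constant-velocity model; the real system is an EKF over the vehicle and landmark state, which this does not reproduce.

```python
import numpy as np

# Linear Kalman filter followed by a Rauch-Tung-Striebel (RTS) smoother,
# shown on a 1D constant-velocity model as a stand-in for the vehicle state.
F = np.array([[1.0, 1.0], [0.0, 1.0]])   # state transition (position, velocity)
H = np.array([[1.0, 0.0]])               # only the position is observed
Q = 0.01 * np.eye(2)                     # process noise
R = np.array([[0.5]])                    # measurement noise

rng = np.random.default_rng(4)
true_pos = np.cumsum(np.full(50, 1.0))
measurements = true_pos + rng.normal(0, 0.7, size=50)

x, P = np.zeros(2), np.eye(2)
xs, Ps, x_preds, P_preds = [], [], [], []
for z in measurements:                    # forward (filter) pass
    x_pred, P_pred = F @ x, F @ P @ F.T + Q
    S = H @ P_pred @ H.T + R
    K = P_pred @ H.T @ np.linalg.inv(S)
    x = x_pred + K @ (np.atleast_1d(z) - H @ x_pred)
    P = (np.eye(2) - K @ H) @ P_pred
    xs.append(x); Ps.append(P); x_preds.append(x_pred); P_preds.append(P_pred)

xs_s = [None] * len(xs)                   # backward (smoothing) pass
xs_s[-1] = xs[-1]
for k in range(len(xs) - 2, -1, -1):
    C = Ps[k] @ F.T @ np.linalg.inv(P_preds[k + 1])
    xs_s[k] = xs[k] + C @ (xs_s[k + 1] - x_preds[k + 1])

print("filtered last pos:", xs[-1][0], " smoothed first pos:", xs_s[0][0])
```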
Abstract:
A visual SLAM system has been implemented and optimised for real-time deployment on an AUV equipped with calibrated stereo cameras. The system incorporates a novel approach to landmark description in which landmarks are local submaps consisting of a cloud of 3D points and their associated SIFT/SURF descriptors. Landmarks are also sparsely distributed, which simplifies and accelerates data association and map updates. In addition to landmark-based localisation, the system uses visual odometry to estimate the pose of the vehicle in 6 degrees of freedom by identifying temporal matches between consecutive local submaps and computing the motion. Both the extended Kalman filter and the unscented Kalman filter have been considered for filtering the observations. The output of the filter is also smoothed using the Rauch-Tung-Striebel (RTS) method to obtain a better alignment of the sequence of local submaps and to deliver a large-scale 3D acquisition of the surveyed area. Synthetic experiments have been performed using a simulation environment in which ray tracing is used to generate synthetic images for the stereo system.
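The visual odometry step, computing the 6-DOF motion from temporal matches between consecutive local submaps, reduces to a rigid alignment of matched 3D point sets. A minimal numpy sketch of that alignment (Kabsch-style least squares) is given below with synthetic points; descriptor matching, the EKF/UKF and the submap bookkeeping are omitted.

```python
import numpy as np

def rigid_motion(P, Q):
    """Least-squares rotation R and translation t with Q ≈ R @ P + t
    (Kabsch/Umeyama without scale), for matched Nx3 point sets."""
    cP, cQ = P.mean(axis=0), Q.mean(axis=0)
    H = (P - cP).T @ (Q - cQ)
    U, _, Vt = np.linalg.svd(H)
    D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])
    R = Vt.T @ D @ U.T
    t = cQ - R @ cP
    return R, t

rng = np.random.default_rng(5)
P = rng.random((100, 3))                       # 3D points of one local submap
angle = np.deg2rad(10)
R_true = np.array([[np.cos(angle), -np.sin(angle), 0.0],
                   [np.sin(angle),  np.cos(angle), 0.0],
                   [0.0, 0.0, 1.0]])
t_true = np.array([0.3, -0.1, 0.05])
Q = P @ R_true.T + t_true                      # same points seen from the next pose

R_est, t_est = rigid_motion(P, Q)
print(np.allclose(R_est, R_true, atol=1e-6), np.allclose(t_est, t_true, atol=1e-6))
```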
Abstract:
A technique for simultaneous localisation and mapping (SLAM) in large-scale scenarios is presented. The solution is based on the use of independent submaps of limited size to map large areas. In addition, a global stochastic map containing the links between adjacent submaps is built. The information at both levels is corrected every time a loop is closed: local maps are updated with the information from overlapping maps, and the global stochastic map is optimised by means of constrained minimisation.
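As a toy illustration of correcting a global map by constrained minimisation when a loop is closed (not the paper's formulation), the SciPy sketch below adjusts hypothetical 1D submap base poses so that they best agree with the measured links subject to a loop-closure constraint.

```python
import numpy as np
from scipy.optimize import minimize

# Toy 1D "global map": submap base poses p_0..p_5 linked by noisy relative
# odometry d_i, with a loop-closure measurement fixing p_5 - p_0.
d = np.array([1.0, 1.1, 0.9, 1.05, 0.95])      # measured links between submaps
loop = 4.8                                      # independently measured p_5 - p_0

def cost(p):
    return np.sum((np.diff(p) - d) ** 2)        # disagreement with the links

constraints = ({"type": "eq", "fun": lambda p: p[0]},                 # anchor p_0 = 0
               {"type": "eq", "fun": lambda p: p[-1] - p[0] - loop})  # loop closure

p0 = np.concatenate([[0.0], np.cumsum(d)])      # initial guess from odometry alone
res = minimize(cost, p0, method="SLSQP", constraints=constraints)
print(res.x)
```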
Abstract:
This work investigates the performance of recent feature-based matching techniques when applied to the registration of underwater images. Matching methods are tested against different contrast-enhancing pre-processing of the images. Based on experiments covering the artifacts and deformations that typically dominate underwater images, the best-performing pre-processing, detection and description methods are identified.
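One plausible instance of such a pipeline, contrast-enhancing pre-processing followed by feature detection, description and matching, is sketched below with OpenCV; CLAHE and ORB are chosen only for illustration, since the abstract does not say which methods performed best, and the images are synthetic.

```python
import cv2
import numpy as np

# Synthetic low-contrast frame with a few shapes, and a shifted copy standing
# in for the next image of an underwater sequence.
img1 = np.full((240, 320), 40, dtype=np.uint8)
cv2.rectangle(img1, (60, 60), (120, 140), 90, -1)
cv2.rectangle(img1, (150, 30), (200, 70), 75, -1)
cv2.circle(img1, (240, 150), 35, 70, -1)
cv2.line(img1, (20, 200), (300, 210), 100, 3)
img2 = np.roll(img1, shift=(8, 15), axis=(0, 1))

# Contrast-enhancing pre-processing (CLAHE) before feature extraction.
clahe = cv2.createCLAHE(clipLimit=2.0, tileGridSize=(8, 8))
enh1, enh2 = clahe.apply(img1), clahe.apply(img2)

# Feature detection/description (ORB) and brute-force matching.
orb = cv2.ORB_create(nfeatures=500, fastThreshold=5)
kp1, des1 = orb.detectAndCompute(enh1, None)
kp2, des2 = orb.detectAndCompute(enh2, None)
matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
matches = matcher.match(des1, des2)
print(len(kp1), "and", len(kp2), "keypoints,", len(matches), "matches")
```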
Abstract:
We propose a probabilistic object classifier for outdoor scene analysis as a first step in solving the problem of scene context generation. The method begins with a top-down control stage, which uses previously learned models (appearance and absolute location) to obtain an initial pixel-level classification. This information provides the cores of the objects, which are used to acquire more accurate object models. Growing these cores with specific active regions then allows accurate recognition of the known regions. Next, a general segmentation stage provides the segmentation of unknown regions with a bottom-up strategy. Finally, the last stage performs a region fusion of known and unknown segmented objects. The result is both a segmentation of the image and a recognition of each segment as a given object class or as an unknown segmented object. Experimental results are shown and evaluated to prove the validity of our proposal.
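The initial pixel-level classification from learned appearance and absolute-location models can be pictured as a per-pixel argmax over class likelihoods weighted by a location prior, as in the toy sketch below; the class names, Gaussian colour models and row-based prior are invented for illustration.

```python
import numpy as np
from scipy.stats import multivariate_normal

rng = np.random.default_rng(6)
H, W = 60, 80

# "Learned" appearance models: one colour Gaussian per class (hypothetical).
appearance = {
    "sky":   multivariate_normal(mean=[120, 160, 220], cov=np.diag([200, 200, 200])),
    "grass": multivariate_normal(mean=[60, 140, 60],   cov=np.diag([200, 200, 200])),
}
# Absolute-location priors: probability of each class per image row.
rows = np.linspace(0, 1, H)[:, None]                 # 0 = top, 1 = bottom
location = {"sky": 1 - rows, "grass": rows}          # sky on top, grass below

# Synthetic image: sky-like upper half, grass-like lower half.
img = np.empty((H, W, 3))
img[:H // 2] = rng.normal([120, 160, 220], 15, (H // 2, W, 3))
img[H // 2:] = rng.normal([60, 140, 60], 15, (H - H // 2, W, 3))

# Initial pixel-level classification: argmax of appearance likelihood x prior.
scores = np.stack([appearance[c].pdf(img) * location[c] for c in ("sky", "grass")])
labels = scores.argmax(axis=0)                       # 0 = sky, 1 = grass
print(labels[:3, :3], labels[-3:, -3:], sep="\n")
```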
Abstract:
Image segmentation of natural scenes constitutes a major problem in machine vision. This paper presents a new proposal for the image segmentation problem based on the integration of edge and region information. The approach begins by detecting the main contours of the scene, which are later used to guide a concurrent set of growing processes. A prior analysis of the seed pixels permits adjustment of the homogeneity criterion to the region's characteristics during the growing process. Since the high variability of regions in outdoor scenes makes the classical homogeneity criteria useless, a new homogeneity criterion based on clustering analysis and convex hull construction is proposed. Experimental results have proven the reliability of the proposed approach.
Abstract:
In this paper the authors propose a new closed contour descriptor, which can be seen as a feature extractor for closed contours based on the Discrete Hartley Transform (DHT). Its main characteristic is that it uses only half the coefficients required by Elliptical Fourier Descriptors (EFD) to obtain a contour approximation with a similar error. The proposed closed contour descriptor provides an excellent capability for information compression, useful for a great number of AI applications. Moreover, it can provide scale, position and rotation invariance, and it has the advantage that both the parameterization and the shape reconstructed from the compressed set can be computed very efficiently by the fast Discrete Hartley Transform (DHT) algorithm. This feature extractor can be useful when the application calls for reversible features and when the user needs an easy measure of quality for a given level of compression, scalable from low to very high quality.
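A minimal numpy sketch of the underlying idea: take the Hartley coefficients of each contour coordinate (computed through the FFT), keep only the low-order harmonics as the compressed feature, and reconstruct with the self-inverse DHT to measure the approximation error. The test contour and the number of retained harmonics are arbitrary, and the paper's invariance normalisations are not reproduced.

```python
import numpy as np

def dht(x):
    """Discrete Hartley Transform via the FFT: H = Re(F) - Im(F)."""
    F = np.fft.fft(x)
    return F.real - F.imag

def idht(H):
    """The DHT is its own inverse up to a factor 1/N."""
    return dht(H) / len(H)

def compress_contour(coords, keep):
    """Keep only the `keep` lowest Hartley harmonics of one coordinate.

    Each harmonic couples indices k and N-k, so both ends are retained.
    """
    N = len(coords)
    H = dht(coords)
    mask = np.zeros(N, dtype=bool)
    mask[:keep + 1] = True
    mask[N - keep:] = True
    return idht(np.where(mask, H, 0.0))

# Closed test contour: a slightly wobbly ellipse sampled at N points.
N = 256
t = np.linspace(0, 2 * np.pi, N, endpoint=False)
x = 3.0 * np.cos(t) + 0.2 * np.cos(5 * t)
y = 2.0 * np.sin(t) + 0.1 * np.sin(7 * t)

xr, yr = compress_contour(x, keep=4), compress_contour(y, keep=4)
err = np.sqrt(np.mean((x - xr) ** 2 + (y - yr) ** 2))
print("RMS reconstruction error with 4 harmonics per coordinate:", err)
```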