900 results for Buildings -- Repair and reconstruction -- Contests
Abstract:
In this paper, an improved technique for evolving wavelet filter coefficients for the compression and reconstruction of fingerprint images is presented. The FBI fingerprint compression standard [1, 2] uses the CDF 9/7 wavelet filter coefficients. The lifting scheme is an efficient way to represent classical wavelets with fewer filter coefficients [3, 4]. Here a genetic algorithm (GA) is used to evolve better lifting filter coefficients for the CDF 9/7 wavelet, so that fingerprint images can be compressed and reconstructed with better quality. Since the lifting filter coefficients are few in number compared with the corresponding classical wavelet filter coefficients, the GA can evolve them faster. A better reconstructed image quality, in terms of Peak Signal-to-Noise Ratio (PSNR), is achieved with the best lifting filter coefficients evolved for a compression ratio of 16:1. The evolved coefficients also perform well at other compression ratios.
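As a rough, hedged illustration of the idea (not the paper's actual method or data): the sketch below perturbs the four CDF 9/7 lifting coefficients with a minimal mutation-only GA and scores each candidate by the PSNR obtained after keeping only 1/16 of the wavelet coefficients, a crude stand-in for 16:1 compression. The test image, population size and mutation scheme are illustrative assumptions.

```python
import numpy as np

CDF97 = np.array([-1.586134342, -0.052980119, 0.882911076, 0.443506852])  # standard lifting constants
ZETA = 1.149604398                                                        # scaling constant, kept fixed

def lift_forward(x, c):
    a, b, g, d = c
    s, w = x[0::2].copy(), x[1::2].copy()        # split into even (approx) and odd (detail) samples
    w += a * (s + np.roll(s, -1)); s += b * (w + np.roll(w, 1))
    w += g * (s + np.roll(s, -1)); s += d * (w + np.roll(w, 1))
    return s * ZETA, w / ZETA

def lift_inverse(s, w, c):
    a, b, g, d = c
    s, w = s / ZETA, w * ZETA
    s -= d * (w + np.roll(w, 1)); w -= g * (s + np.roll(s, -1))
    s -= b * (w + np.roll(w, 1)); w -= a * (s + np.roll(s, -1))
    x = np.empty(s.size + w.size); x[0::2], x[1::2] = s, w
    return x

def psnr(a, b):
    mse = np.mean((a - b) ** 2)
    return np.inf if mse == 0 else 10 * np.log10(255.0 ** 2 / mse)

def fitness(img, c, ratio=16):
    # One separable decomposition level (rows, then columns) on an even-sized square image.
    half = img.shape[0] // 2
    rows = np.array([np.concatenate(lift_forward(r, c)) for r in img])
    coef = np.array([np.concatenate(lift_forward(col, c)) for col in rows.T]).T
    flat = np.abs(coef).ravel()
    k = flat.size - flat.size // ratio                 # index of the smallest kept coefficient
    thr = np.partition(flat, k)[k]
    kept = np.where(np.abs(coef) >= thr, coef, 0.0)    # crude 16:1 "compression" by thresholding
    cols = np.array([lift_inverse(col[:half], col[half:], c) for col in kept.T]).T
    rec = np.array([lift_inverse(r[:half], r[half:], c) for r in cols])
    return psnr(img, rec)

rng = np.random.default_rng(0)
img = rng.integers(0, 256, (64, 64)).astype(float)     # stand-in for a fingerprint image
pop = CDF97 + 0.01 * rng.standard_normal((20, 4))      # population seeded near the CDF 9/7 values
for gen in range(30):                                  # minimal mutation-only GA (no crossover)
    scores = np.array([fitness(img, ind) for ind in pop])
    best = pop[scores.argmax()]
    pop = best + 0.005 * rng.standard_normal((20, 4))
    pop[0] = best                                      # elitism: always keep the current best
print("evolved PSNR:", fitness(img, best), " CDF 9/7 PSNR:", fitness(img, CDF97))
```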
Abstract:
This paper investigates the linear degeneracies of projective structure estimation from point and line features across three views. We show that the rank of the linear system of equations for recovering the trilinear tensor of three views reduces to 23 (instead of 26) when the scene is a Linear Line Complex (a set of lines in space intersecting at a common line) and to 21 when the scene is planar. The LLC situation is only linearly degenerate, and we show that one can obtain a unique solution when the admissibility constraints of the tensor are accounted for. The line configuration described by an LLC, rather than being some obscure case, is in fact quite typical. It includes, as a particular example, the case of a camera moving down a hallway in an office environment or down an urban street. Furthermore, an LLC situation may occur as an artifact, for example in direct estimation from spatio-temporal derivatives of image brightness. Therefore, an investigation into degeneracies and their remedy is also important in practice.
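Stated in equation form (a restatement of the counts above, assuming the standard 27-entry parameterisation of the trilinear tensor): the point and line correspondences yield a homogeneous linear system

\[
A\,\mathbf{t} = \mathbf{0}, \qquad \mathbf{t} = \operatorname{vec}\!\left(T_i^{\,jk}\right) \in \mathbb{R}^{27}.
\]

For generic scenes \(\operatorname{rank}(A) = 26\), so \(\mathbf{t}\) is determined uniquely up to scale. For a Linear Line Complex \(\operatorname{rank}(A) = 23\), leaving a \(27 - 23 = 4\)-dimensional null space, and for a planar scene \(\operatorname{rank}(A) = 21\); in the LLC case the admissibility (internal consistency) constraints of the tensor single out the correct solution within that null space.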
Abstract:
This paper presents a new paradigm for signal reconstruction and superresolution, Correlation Kernel Analysis (CKA), which is based on the selection of a sparse set of bases from a large dictionary of class-specific basis functions. The basis functions that we use are the correlation functions of the class of signals we are analyzing. To choose the appropriate features from this large dictionary, we use Support Vector Machine (SVM) regression and compare this to traditional Principal Component Analysis (PCA) for the tasks of signal reconstruction, superresolution, and compression. The testbed we use in this paper is a set of images of pedestrians. This paper also presents results of experiments in which we use a dictionary of multiscale basis functions and then use Basis Pursuit De-Noising to obtain a sparse, multiscale approximation of a signal. The results are analyzed and we conclude that 1) when used with a sparse representation technique, the correlation function is an effective kernel for image reconstruction and superresolution, 2) for image compression, PCA and SVM have different tradeoffs, depending on the particular metric used to evaluate the results, 3) in sparse representation techniques, L_1 is not a good proxy for the true measure of sparsity, L_0, and 4) the L_ε norm may be a better error metric for image reconstruction and compression than the L_2 norm, though the exact psychophysical metric should take into account higher-order structure in images.
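For the sparse multiscale approximation step, one standard way to solve the Basis Pursuit De-Noising problem is iterative soft-thresholding (ISTA). The sketch below is a minimal illustration with a synthetic dictionary and signal, not the paper's pedestrian data or correlation-function basis set.

```python
import numpy as np

def ista_bpdn(D, y, lam, n_iter=500):
    """Minimise 0.5*||y - D x||^2 + lam*||x||_1 by iterative soft-thresholding."""
    L = np.linalg.norm(D, 2) ** 2                 # Lipschitz constant of the gradient
    x = np.zeros(D.shape[1])
    for _ in range(n_iter):
        grad = D.T @ (D @ x - y)                  # gradient of the quadratic data term
        z = x - grad / L
        x = np.sign(z) * np.maximum(np.abs(z) - lam / L, 0.0)  # soft threshold (L1 proximal step)
    return x

rng = np.random.default_rng(0)
D = rng.standard_normal((64, 256))                # overcomplete dictionary (e.g. multiscale bases)
x_true = np.zeros(256)
x_true[rng.choice(256, 5, replace=False)] = rng.standard_normal(5)
y = D @ x_true + 0.01 * rng.standard_normal(64)   # noisy observation of a sparse signal
x_hat = ista_bpdn(D, y, lam=0.05)
print("non-zeros recovered:", np.sum(np.abs(x_hat) > 1e-3))
```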
Abstract:
Photo-mosaicing techniques have become popular for seafloor mapping in various marine science applications. However, the common methods cannot accurately map regions with high relief and topographical variations. Ortho-mosaicing, borrowed from photogrammetry, is an alternative technique that takes the 3-D shape of the terrain into account. A serious bottleneck is the volume of elevation information that needs to be estimated from the video data, fused, and processed for the generation of a composite ortho-photo that covers a relatively large seafloor area. We present a framework that combines the advantages of dense depth-map and 3-D feature estimation techniques based on visual motion cues. The main goal is to identify and reconstruct certain key terrain feature points that adequately represent the surface with minimal complexity, in the form of piecewise planar patches. The proposed implementation utilizes local depth maps for feature selection, while tracking over several views enables 3-D reconstruction by bundle adjustment. Experimental results with synthetic and real data validate the effectiveness of the proposed approach.
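As a minimal illustration of the piecewise-planar representation, the sketch below fits a single planar patch to a cluster of reconstructed 3-D feature points by total least squares (SVD). The points are synthetic, and the feature-selection and bundle-adjustment stages of the paper are not modelled.

```python
import numpy as np

def fit_plane(points):
    """Return (centroid, unit normal) of the best-fit plane through an Nx3 point set."""
    centroid = points.mean(axis=0)
    _, _, vt = np.linalg.svd(points - centroid)   # right singular vectors of centred points
    normal = vt[-1]                               # direction of least variance = plane normal
    return centroid, normal

rng = np.random.default_rng(1)
xy = rng.uniform(-1, 1, (200, 2))                 # synthetic terrain patch: z = 0.3x - 0.2y + noise
z = 0.3 * xy[:, 0] - 0.2 * xy[:, 1] + 0.01 * rng.standard_normal(200)
c, n = fit_plane(np.column_stack([xy, z]))
print("patch centroid:", c, " plane normal (up to sign):", n)
```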
Abstract:
Glucose uptake and its conversion into lactate play a fundamental role in tumour metabolism, regardless of the oxygen concentration present in the tissue (Warburg effect). However, this uptake varies from one tumour type to another, and within the same tumour, a situation that may depend on the characteristics of the tumour microenvironment (oxygen fluctuations, presence of other cell types) and on stressors associated with treatment. The effect of hypoxia-reoxygenation (HR) and ionising radiation (IR) on glucose uptake was studied in cultures of the MCF-7 and HT-29 tumour cell lines, grown in isolation or in co-culture with the EAhy296 cell line. Glucose uptake under HR was found to differ from that described under permanent hypoxia and to be modified in co-culture. Cell populations with high and low glucose uptake were identified within the same cell line, which would imply a metabolic symbiosis of the cells as an adaptive response to tumour conditions. The expression of NRF2 and the nuclear translocation of NRF2 and HIF1a were evaluated as response pathways to cellular stress and hypoxia. The nuclear translocation of the proteins evaluated would explain the metabolic behaviour of the breast tumour cells, but not of the colon tumour cells, so other metabolic pathways must be involved. The differences in the behaviour of tumour cells under HR compared with hypoxia will allow more dynamic dosimetric planning that constantly re-evaluates tumour oxygenation conditions.
Abstract:
Introduction: Knowing and diagnosing the most frequent variations of the renal vasculature is of great importance for planning laparoscopic donor nephrectomy and for vascular reconstruction in renal transplantation. Likewise, considering vascular variations, especially those of the venous system, is indispensable in vascular reconstruction because of the large proportion of venous variations associated with abdominal aortic aneurysms; it is also valuable in the study of clinical conditions such as pelvic congestion syndrome and haematuria. Methods: This is a review of the literature on the prevalence, diagnosis, surgical procedures and clinical syndromes associated with variations of the renal vasculature, based on the material retrieved with the following search strategy: "Renal Artery/abnormalities"[Mesh] OR "Renal Veins/abnormalities"[Mesh] AND "surgery"[Mesh] OR "transplantation"[Mesh] OR "radiography"[Mesh] "Kidney Pelvis/abnormalities"[Mesh] AND "Kidney Pelvis/blood supply"[Mesh]. This strategy was adapted to the following databases: MEDLINE/PubMed, MEDLINE OVID, SCIENCEDIRECT, HINARI and LILACS. Development: The origin and the most frequent types of variations of the renal vasculature were reviewed, and their surgical implications and the associated clinical syndromes were investigated.
Abstract:
Introduction: From the 1950s onward, the management of valvular disease changed significantly with the incorporation of both mechanical and biological valve replacements among the surgical treatment options (1). Biological valves were developed as an alternative intended to avoid the problems related to anticoagulation and with the idea of using a tissue that would behave haemodynamically like the native valve. This study aims to establish overall survival and freedom from valve reoperation at 1, 3, 5 and 10 years in patients undergoing biological aortic and mitral valve replacement at the Fundación Cardioinfantil - IC. Materials and methods: Retrospective cohort survival study of patients undergoing biological aortic and/or mitral valve replacement at the Fundación Cardioinfantil between 2005 and 2013. Results: 919 patients were included in the overall analysis and 876 (95.3%) had effective follow-up for the survival analysis. Mean age was 64 years. Survival at 1, 3, 5 and 10 years was 95%, 90%, 85% and 69%, respectively. Effective follow-up for the reoperation outcome was 55%, with freedom from reoperation of 99%, 96%, 93% and 81% at 1, 3, 5 and 10 years. There were no significant differences by valve position or by the type of aortic valve used. Conclusions: The survival of patients undergoing biological valve replacement in this study is comparable to that of large international cohorts. Survival of patients undergoing valve replacement with biological prostheses in the mitral and aortic positions was similar at 1, 3, 5 and 10 years.
Abstract:
This paper describes a method to code a decimated model of an isosurface in an octree representation while retaining the volume data when it is needed. The proposed technique is based on grouping the marching cubes (MC) patterns into five configurations according to the topology and the number of surface planes contained in a cell. Moreover, the discrete number of planes on which the surface lies is fixed. Starting from a complete volume octree, with the isosurface codified at terminal nodes according to the new configuration, a bottom-up strategy is taken for merging cells. Such a strategy allows one to implicitly represent co-planar faces in the upper octree levels without introducing any error. At the end of this merging process, when it is required, a reconstruction strategy is applied to generate the surface contained in the intersected octree leaves. Examples with medical data demonstrate that a reduction of up to 50% in the number of polygons can be achieved.
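A hedged sketch of the bottom-up merging idea, assuming each terminal cell stores a quantised surface plane: children of a node that are all leaves carrying the same plane are collapsed into a single larger cell, so coplanar faces are represented implicitly at higher octree levels without error. The node layout and plane encoding below are illustrative assumptions, not the paper's data structure.

```python
from dataclasses import dataclass
from typing import List, Optional, Tuple

Plane = Tuple[int, int, int, int]             # quantised (nx, ny, nz, d) of the surface plane in a cell

@dataclass
class Node:
    plane: Optional[Plane] = None             # surface plane stored at a terminal cell (None = no surface)
    children: Optional[List["Node"]] = None   # eight children for internal nodes

def merge_coplanar(node: Node) -> Node:
    """Bottom-up pass: collapse nodes whose eight children are leaves sharing one plane."""
    if node.children is None:
        return node
    node.children = [merge_coplanar(c) for c in node.children]
    if all(c.children is None for c in node.children):
        planes = {c.plane for c in node.children}
        if len(planes) == 1:                  # co-planar (or uniformly empty) children: merge without error
            return Node(plane=planes.pop())
    return node

# Tiny usage example: eight leaves carrying the same quantised plane merge into one cell.
leaves = [Node(plane=(1, 0, 0, 3)) for _ in range(8)]
root = merge_coplanar(Node(children=leaves))
print(root.children is None, root.plane)      # True (1, 0, 0, 3)
```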
Abstract:
The human visual ability to perceive depth looks like a puzzle. We perceive three-dimensional spatial information quickly and efficiently by using the binocular stereopsis of our eyes and, what is more important, the knowledge of the most common objects that we acquire through experience. Modelling the behaviour of our brain is still out of reach, which is why the huge problem of 3D perception and, further, interpretation is split into a sequence of easier problems. A lot of research in robot vision is devoted to obtaining 3D information about the surrounding scene. Most of this research is based on modelling the stereopsis of humans by using two cameras as if they were two eyes. This method is known as stereo vision; it has been widely studied in the past, is being studied at present, and a lot of work will surely be done on it in the future. This allows us to affirm that it is one of the most interesting topics in computer vision. The stereo vision principle is based on obtaining the three-dimensional position of an object point from the positions of its projections in both camera image planes. However, before inferring 3D information, the mathematical models of both cameras have to be known. This step is known as camera calibration and is described at length in the thesis. Perhaps the most important problem in stereo vision is the determination of the pair of homologous points in the two images, known as the correspondence problem; it is also one of the most difficult problems to solve and is currently investigated by many researchers. The epipolar geometry allows us to reduce the correspondence problem, and an approach to it is described in the thesis. Nevertheless, it does not solve the problem completely, as many other considerations have to be taken into account; for example, we have to consider points without correspondence due to a surface occlusion or simply due to a projection outside the field of view of the other camera. The interest of the thesis is focused on structured light, which is considered one of the most frequently used techniques for reducing the problems related to stereo vision. Structured light is based on the relationship between a projected light pattern and the image of its projection captured by a sensor. The deformation between the pattern projected onto the scene and the one captured by the camera makes it possible to obtain three-dimensional information about the illuminated scene. This technique has been widely used in applications such as 3D object reconstruction, robot navigation, quality control, and so on. Although the projection of regular patterns solves the problem of points without a match, it does not solve the problem of multiple matching, which forces us to use computationally hard algorithms to search for the correct matches. In recent years, another structured light technique has grown in importance. This technique is based on coding the light projected onto the scene so that it can be used as a tool to obtain a unique match: as each token of light is imaged by the camera, we only have to read its label (decode the pattern) to solve the correspondence problem. The advantages and disadvantages of stereo vision versus structured light, and a survey of coded structured light, are presented and discussed. The work carried out in the frame of this thesis has led to a new coded structured light pattern which solves the correspondence problem uniquely and robustly.
Unique, because each token of light is coded by a different word, which removes the problem of multiple matching. Robust, because the pattern is coded using the position of each token of light with respect to both coordinate axes. Algorithms and experimental results are included in the thesis. The reader can see examples of 3D measurement of static objects, as well as the more complicated measurement of moving objects. The technique can be used in both cases because the pattern is coded in a single projection shot, so it can be used in several applications of robot vision. Our interest is focused on the mathematical study of the camera and pattern projector models. We are also interested in how these models can be obtained by calibration, and how they can be used to obtain three-dimensional information from two corresponding points. Furthermore, we have studied structured light and coded structured light, and we have presented a new coded structured light pattern. However, in this thesis we started from the assumption that the correspondence points could be well segmented from the captured image. Computer vision constitutes a huge problem, and a lot of work is being done at all levels of human vision modelling, starting from (a) image acquisition; (b) image enhancement, filtering and processing; and (c) image segmentation, which involves thresholding, thinning, contour detection, texture and colour analysis, and so on. The interest of this thesis starts at the next step, usually known as depth perception or 3D measurement.
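As a small worked example of the final step described above, obtaining 3-D information from two corresponding points given calibrated camera (or projector) models, the sketch below performs standard linear (DLT) triangulation. The projection matrices and image points are made up for illustration and are not taken from the thesis.

```python
import numpy as np

def triangulate(P1, P2, x1, x2):
    """Return the 3-D point that best satisfies x1 ~ P1 X and x2 ~ P2 X (algebraic error)."""
    A = np.vstack([
        x1[0] * P1[2] - P1[0],
        x1[1] * P1[2] - P1[1],
        x2[0] * P2[2] - P2[0],
        x2[1] * P2[2] - P2[1],
    ])
    _, _, vt = np.linalg.svd(A)               # null vector of A is the homogeneous point
    X = vt[-1]
    return X[:3] / X[3]

P1 = np.hstack([np.eye(3), np.zeros((3, 1))])               # first camera at the origin
P2 = np.hstack([np.eye(3), np.array([[-1.0], [0.0], [0.0]])])  # second camera, translated baseline
X_true = np.array([0.2, -0.1, 4.0, 1.0])                    # synthetic scene point (homogeneous)
x1 = P1 @ X_true; x1 = x1[:2] / x1[2]                       # its projections in both images
x2 = P2 @ X_true; x2 = x2[:2] / x2[2]
print(triangulate(P1, P2, x1, x2))                          # recovers ~ [0.2, -0.1, 4.0]
```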
Abstract:
We investigate the question of how many facets are needed to represent the energy balance of an urban area by developing simplified 3-, 2- and 1-facet versions of a 4-facet energy balance model of two-dimensional streets and buildings. The 3-facet model simplifies the 4-facet model by averaging over the canyon orientation, which results in similar net shortwave and longwave balances for both wall facets, but maintains the asymmetry in the heat fluxes within the street canyon. For the 2-facet model, on the assumption that the wall and road temperatures are equal, the road and wall facets can be combined mathematically into a single street-canyon facet with effective values of the heat transfer coefficient, albedo, emissivity and thermodynamic properties, without further approximation. The 1-facet model requires the additional assumption that the roof temperature is also equal to the road and wall temperatures. Idealised simulations show that the geometry and material properties of the walls and road lead to a large heat capacity of the combined street canyon, whereas the roof behaves like a flat surface with low heat capacity. This means that the magnitude of the diurnal temperature variation of the street-canyon facets is broadly similar and much smaller than the diurnal temperature variation of the roof facets. Consequently, the approximation that the street-canyon facets have similar temperatures is sound, and the road and walls can be combined into a single facet. The roof behaves very differently and a separate roof facet is required. Consequently, the 2-facet model performs similarly to the 4-facet model, while the 1-facet model does not. The models are compared with previously published observations collected in Mexico City. Although the 3- and 2-facet models perform better than the 1-facet model, the present models are unable to represent the phase of the sensible heat flux. This result is consistent with previous model comparisons, and we argue that this feature of the data cannot be produced by a single-column model. We conclude that a 2-facet model is necessary, and for numerical weather prediction sufficient, to model an urban surface, and that this conclusion is robust and therefore applicable to more general geometries.
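As an illustration of how such effective street-canyon values can arise (a simple area-weighted form shown for the heat capacity only; the exact combination of all properties is derived in the paper): if the wall and road share one temperature, their heat-storage terms add, so the combined facet has, per unit combined area,

\[
T_{\mathrm{wall}} = T_{\mathrm{road}} \equiv T_{\mathrm{can}}
\quad\Rightarrow\quad
C_{\mathrm{can}} = \frac{A_{\mathrm{road}}\,C_{\mathrm{road}} + A_{\mathrm{wall}}\,C_{\mathrm{wall}}}{A_{\mathrm{road}} + A_{\mathrm{wall}}},
\]

where \(A\) denotes facet area and \(C\) heat capacity per unit facet area; analogous effective values for the heat-transfer coefficient, albedo and emissivity follow from summing the individual facet energy balances.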
Abstract:
This study examines the efficacy of published δ18O data from the calcite of Late Miocene surface dwelling planktonic foraminifer shells, for sea surface temperature estimates for the pre-Quaternary. The data are from 33 Late Miocene (Messinian) marine sites from a modern latitudinal gradient of 64°N to 48°S. They give estimates of SSTs in the tropics/subtropics (to 30°N and S) that are mostly cooler than present. Possible causes of this temperature discrepancy are ecological factors (e.g. calcification of shells at levels below the ocean mixed layer), taphonomic effects (e.g. diagenesis or dissolution), inaccurate estimation of Late Miocene seawater oxygen isotope composition, or a real Late Miocene cool climate. The scale of apparent cooling in the tropics suggests that the SST signal of the foraminifer calcite has been reset, at least in part, by early diagenetic calcite with higher δ18O, formed in the foraminifer shells in cool sea bottom pore waters, probably coupled with the effects of calcite formed below the mixed layer during the life of the foraminifera. This hypothesis is supported by the markedly cooler SST estimates from low latitudes (in some cases more than 9 °C cooler than present), where the gradients of temperature and the δ18O composition of seawater between sea surface and sea bottom are most marked, and where ocean surface stratification is high. At higher latitudes, particularly N and S of 30°, the temperature signal is still cooler, though maximum temperature estimates overlap with modern SSTs N and S of 40°. Comparison of Late Miocene SST estimates from alkenone unsaturation analysis in the eastern tropical Atlantic at Ocean Drilling Program (ODP) Site 958, which suggest a sea surface warmer than present by 2–4 °C, with oxygen isotope estimates at Deep Sea Drilling Project (DSDP) Site 366 and ODP Site 959, which indicate SSTs cooler than present, also suggests a significant impact on the δ18O signal. Nevertheless, much of the original SST variation is clearly preserved in the primary calcite formed in the mixed layer, and records secular and temporal oceanographic changes at the sea surface, such as movement of the Antarctic Polar Front in the Southern Ocean. Cooler SSTs in the tropics and sub-tropics are also consistent with the Late Miocene latitude reduction in the coral reef belt and with interrupted reef growth on the Queensland Plateau of eastern Australia, though it is not possible to quantify absolute SSTs with the existing oxygen isotope data. Reconstruction of an accurate global SST dataset for Neogene time-slices from the existing published DSDP/ODP isotope data, for use in general circulation models, may require a detailed re-assessment of taphonomy at many sites.
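For reference, the kind of calcite palaeotemperature relation underlying such SST estimates (one commonly used form, after Shackleton, 1974) is

\[
T(^{\circ}\mathrm{C}) \;=\; 16.9 \;-\; 4.38\,(\delta_{\mathrm{c}} - \delta_{\mathrm{w}}) \;+\; 0.10\,(\delta_{\mathrm{c}} - \delta_{\mathrm{w}})^{2},
\]

where \(\delta_{\mathrm{c}}\) is the δ18O of the foraminiferal calcite and \(\delta_{\mathrm{w}}\) that of the ambient seawater. It makes explicit why calcite with higher δ18O, whether from early diagenetic overgrowths or from calcification in colder water below the mixed layer, shifts the inferred temperatures towards cooler values, and why the assumed Late Miocene seawater δ18O enters the estimate directly.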
Abstract:
The development of an urban property in the Roman town of Calleva Atrebatum (Silchester, Hampshire, England) is traced from the late 1st to the mid-3rd century AD. Three successive periods of building with their associated finds of artefacts and biological remains are described and interpreted with provisional reconstructions of the buildings. Links are provided to a copy of the Integrated Archaeological Database (IADB), archived by the Archaeology Data Service, which holds the primary excavation and finds records.
Abstract:
Silchester is the site of a major late Iron Age and Roman town (Calleva Atrebatum), situated in northern Hampshire (England, UK) and occupied between the late first century BC and the fifth or sixth century AD. Extensive evidence of the nature of the buildings and the plan of the town was obtained from excavations undertaken between 1890 and 1909. The purpose of this study was to use soil geochemical analyses to reinforce the archaeological evidence, particularly with reference to potential metal working at the site. Soil analysis has been used previously to distinguish different functions or land use activity over a site and to aid identification and interpretation of settlement features (Entwistle et al., 2000). Samples were taken from two areas of the excavation on a 1-metre grid. Firstly, from an area of some 500 square metres covering contexts of late first/early second century AD date throughout the entirety of a large 'town house' (House 1), for which there was prima facie evidence of metalworking.
Abstract:
Health care providers, purchasers and policy makers need to make informed decisions regarding the provision of cost-effective care. When a new health care intervention is to be compared with the current standard, an economic evaluation alongside an evaluation of health benefits provides useful information for the decision making process. We consider the information on cost-effectiveness which arises from an individual clinical trial comparing the two interventions. Recent methods for conducting a cost-effectiveness analysis for a clinical trial have focused on the net benefit parameter. The net benefit parameter, a function of costs and health benefits, is positive if the new intervention is cost-effective compared with the standard. In this paper we describe frequentist and Bayesian approaches to cost-effectiveness analysis which have been suggested in the literature and apply them to data from a clinical trial comparing laparoscopic surgery with open mesh surgery for the repair of inguinal hernias. We extend the Bayesian model to allow the total cost to be divided into a number of different components. The advantages and disadvantages of the different approaches are discussed. In January 2001, NICE issued guidance on the type of surgery to be used for inguinal hernia repair. We discuss our example in the light of this information. Copyright © 2003 John Wiley & Sons, Ltd.
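For concreteness, the net benefit parameter referred to above is usually written (in its net monetary benefit form) as

\[
\mathrm{NB}(\lambda) \;=\; \lambda\,\Delta E \;-\; \Delta C,
\]

where \(\Delta E\) is the mean incremental health benefit of the new intervention over the standard, \(\Delta C\) the mean incremental cost, and \(\lambda\) the decision-maker's willingness to pay per unit of health benefit; the new intervention is judged cost-effective at a given \(\lambda\) when \(\mathrm{NB}(\lambda) > 0\).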