977 results for Semi-automated road extraction
Abstract:
Automated and semi-automated accessibility evaluation tools are key to streamlining the process of accessibility assessment and, ultimately, to ensuring that software products, contents, and services meet accessibility requirements. Different evaluation tools may better fit different needs and concerns, accounting for a variety of corporate and external policies, content types, invocation methods, deployment contexts, exploitation models, intended audiences and goals, and the specific overall process into which they are introduced. This has led to the proliferation of many evaluation tools tailored to specific contexts. However, tool creators, who may not be familiar with the realm of accessibility and may be part of a larger project, lack systematic guidance when facing the implementation of accessibility evaluation functionality. Herein we present a systematic approach to the development of accessibility evaluation tools, leveraging the different artifacts and activities of a standardized development process model (the Unified Software Development Process) and providing templates of these artifacts tailored to accessibility evaluation tools. The work presented especially considers the work in progress in this area by the W3C/WAI Evaluation and Repair Tools Working Group (ERT WG).
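As a hedged illustration of the kind of automated check such tools implement, the minimal sketch below flags <img> elements lacking alt text (cf. WCAG success criterion 1.1.1); the rule choice and report format are assumptions for illustration, not the paper's artifact templates.

```python
# Minimal sketch of one automated accessibility check (images lacking
# alt text, cf. WCAG 1.1.1). The rule and report format are illustrative
# assumptions, not the artifact templates proposed in the paper.
from html.parser import HTMLParser

class ImgAltChecker(HTMLParser):
    """Collects <img> tags that lack an alt attribute."""
    def __init__(self):
        super().__init__()
        self.violations = []

    def handle_starttag(self, tag, attrs):
        if tag == "img" and dict(attrs).get("alt") is None:
            # empty alt="" is valid for decorative images, so only a
            # missing attribute is reported
            self.violations.append(self.getpos())

checker = ImgAltChecker()
checker.feed('<p><img src="logo.png"><img src="deco.png" alt=""></p>')
for line, col in checker.violations:
    print(f"line {line}, col {col}: <img> without alt attribute")
```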
Abstract:
Container terminals are complex systems in which a large number of factors and stakeholders interact to provide high-quality services under rigid planning schedules and economic objectives. The so-called next-generation terminals are conceived to serve the new mega-vessels, which demand productivity rates of up to 300 moves/hour. These terminals need to satisfy high standards because competition among terminals is fierce. Ensuring reliability in berth scheduling is key to attracting clients, as well as to reducing to a minimum the time that vessels stay in port.
Consequently, operations planning is becoming more complex, and the tolerances for errors are smaller. In this context, operational disturbances must be reduced to a minimum. The main sources of operational disruptions, and thus of uncertainty, are identified and characterized in this study. External drivers interact with the infrastructure and/or the activities, resulting in failure or stoppage modes. The latter may lead not only to operational delays but also to collateral and reputational damage or loss of time (especially management time), all of which implies an impact for the terminal. In the near future, the monitoring of operational variables has great potential to bring a qualitative improvement to the operations management and planning models of terminals, whose level of automation keeps increasing. The combination of expert criteria with instruments that provide short- and long-run data is fundamental for the development of decision-support tools, since these will then be adapted to the real climatic and operational conditions that exist on site. For the short term, a methodology is proposed to obtain forecasts of operational parameters in container terminals; a case study is also presented in which the proposed model is applied to obtain forecasts of vessel productivity. This research has been based entirely on data provided by a Spanish semi-automated container terminal. In addition, it is analyzed how to manage, evaluate and mitigate operational disruptions in the long term by means of risk assessment, an interesting approach to evaluating the effect of uncertain but likely events on the long-term throughput of the terminal. Finally, a definition of operational risk in port facilities is proposed, along with a discussion of the terms that best represent the nature of the activities involved, and guidelines are provided for managing the results obtained.
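The abstract does not detail the forecasting methodology; the sketch below only illustrates the general idea of short-term operational-parameter forecasting, using simple exponential smoothing over hypothetical productivity figures (all values invented).

```python
# Hypothetical sketch of short-term forecasting of vessel productivity
# (moves/hour). The thesis's actual methodology is not detailed in the
# abstract; simple exponential smoothing stands in for it here.
def ses_forecast(series, alpha=0.3):
    """One-step-ahead simple exponential smoothing forecast."""
    level = series[0]
    for x in series[1:]:
        level = alpha * x + (1 - alpha) * level
    return level

# Hypothetical hourly crane productivity observed during a vessel call:
observed = [27.0, 29.5, 31.0, 28.5, 30.0, 32.5]
print(f"next-hour forecast: {ses_forecast(observed):.1f} moves/hour")
```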
Abstract:
Whole genome linkage analysis of type 1 diabetes using affected sib pair families and semi-automated genotyping and data capture procedures has shown how type 1 diabetes is inherited. A major proportion of clustering of the disease in families can be accounted for by sharing of alleles at susceptibility loci in the major histocompatibility complex on chromosome 6 (IDDM1) and at a minimum of 11 other loci on nine chromosomes. Primary etiological components of IDDM1, the HLA-DQB1 and -DRB1 class II immune response genes, and of IDDM2, the minisatellite repeat sequence in the 5' regulatory region of the insulin gene on chromosome 11p15, have been identified. Identification of the other loci will involve linkage disequilibrium mapping and sequencing of candidate genes in regions of linkage.
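As a hedged illustration of the allele-sharing logic behind affected sib-pair linkage analysis, the sketch below implements the classical "mean test": under no linkage, sib pairs share on average 50% of alleles identical by descent (IBD), so significant excess sharing signals a susceptibility locus. The counts used are hypothetical, not data from this study.

```python
# Illustrative "mean test" for excess allele sharing in affected sib
# pairs: under no linkage, pairs share alleles identical by descent
# (IBD) in proportion 0.5 on average, with per-pair variance of the
# sharing proportion equal to 1/8. Counts below are hypothetical.
import math

def asp_mean_test(ibd_counts):
    """ibd_counts: number of alleles (0, 1 or 2) each affected sib
    pair shares IBD at the locus. Returns the z score for excess
    sharing above the null proportion of 0.5."""
    n = len(ibd_counts)
    mean_share = sum(c / 2 for c in ibd_counts) / n
    return (mean_share - 0.5) / math.sqrt(0.125 / n)

# Hypothetical data: 100 pairs sharing 0/1/2 alleles IBD
counts = [0] * 15 + [1] * 50 + [2] * 35
z = asp_mean_test(counts)
print(f"mean IBD sharing = {sum(counts) / (2 * len(counts)):.2f}, z = {z:.2f}")
```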
Abstract:
Customizing shoe manufacturing is one of the great challenges in the footwear industry. It is a change of production model in which design takes not only the leading role but also becomes the main bottleneck. It is therefore necessary to accelerate this process by improving the accuracy of current methods. Rapid prototyping techniques are based on the reuse of manufactured footwear lasts, so that they can be modified with CAD systems to lead rapidly to new shoe models. In this work, we present a fast shoe last reconstruction method that fits current design and manufacturing processes. The method is based on scanning the shoe last to obtain sections and establishing a fixed number of landmarks on those sections to reconstruct the shoe last's 3D surface. Automated landmark extraction is accomplished through the use of a self-organizing network, the growing neural gas (GNG), which is able to topographically map the low dimensionality of the network to the high dimensionality of the contour manifold without requiring a priori knowledge of the input space structure. Moreover, our GNG landmark method is tolerant to noise and eliminates outliers. Our method accelerates by up to 12 times the surface reconstruction and filtering processes used by current shoe last design software. The proposed method offers higher accuracy than methods of similar efficiency, such as the voxel grid.
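A condensed sketch of the growing neural gas algorithm (Fritzke, 1995) as it might be applied to contour samples is given below; the parameter values are illustrative defaults, not those reported in the paper, and removal of isolated nodes is omitted for brevity.

```python
# Condensed sketch of growing neural gas (GNG) fitting a network to 2D
# contour samples. Parameters are illustrative defaults.
import numpy as np

def gng(samples, max_nodes=50, lam=100, eps_b=0.2, eps_n=0.006,
        age_max=50, alpha=0.5, beta=0.995, steps=20000, seed=0):
    rng = np.random.default_rng(seed)
    nodes = [samples[rng.integers(len(samples))].astype(float)
             for _ in range(2)]                 # two initial units
    error = [0.0, 0.0]
    edges = {}                                  # (i, j), i < j -> age
    for t in range(1, steps + 1):
        x = samples[rng.integers(len(samples))]
        d = [np.sum((w - x) ** 2) for w in nodes]
        s1, s2 = np.argsort(d)[:2]              # winner and runner-up
        error[s1] += d[s1]
        nodes[s1] += eps_b * (x - nodes[s1])    # move winner toward x
        for (i, j) in list(edges):              # age winner's edges,
            if s1 in (i, j):                    # nudge its neighbours
                edges[(i, j)] += 1
                other = j if i == s1 else i
                nodes[other] += eps_n * (x - nodes[other])
        edges[tuple(sorted((s1, s2)))] = 0      # refresh/create edge
        edges = {e: a for e, a in edges.items() if a <= age_max}
        if t % lam == 0 and len(nodes) < max_nodes:
            q = int(np.argmax(error))           # worst-error unit
            nbrs = [j if i == q else i for (i, j) in edges if q in (i, j)]
            if nbrs:                            # insert node between q
                f = max(nbrs, key=lambda k: error[k])  # and worst nbr
                nodes.append(0.5 * (nodes[q] + nodes[f]))
                error[q] *= alpha; error[f] *= alpha
                error.append(error[q])
                edges.pop(tuple(sorted((q, f))))
                r = len(nodes) - 1
                edges[tuple(sorted((q, r)))] = 0
                edges[tuple(sorted((f, r)))] = 0
        error = [e * beta for e in error]       # decay all errors
    return np.array(nodes), edges

# Example: fit to points sampled from a noisy circle (a stand-in for a
# scanned shoe last section); the node positions act as landmarks.
rng = np.random.default_rng(1)
theta = rng.uniform(0, 2 * np.pi, 2000)
ring = np.c_[np.cos(theta), np.sin(theta)] + 0.02 * rng.normal(size=(2000, 2))
landmarks, topology = gng(ring, max_nodes=30, steps=6000)
print(landmarks.shape)
```

The trained node positions then serve as the fixed set of landmarks on each section, with the edge topology preserving neighbourhood order along the contour.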
Abstract:
In this study, we utilise a novel approach to segment out the ventricular system in a series of high-resolution T1-weighted MR images. We present a fast reconstruction method for the brain ventricles. The method is based on processing brain sections and establishing a fixed number of landmarks on those sections to reconstruct the 3D surface of the ventricles. Automated landmark extraction is accomplished through the use of a self-organising network, the growing neural gas (GNG), which is able to topographically map the low dimensionality of the network to the high dimensionality of the contour manifold without requiring a priori knowledge of the input space structure. Moreover, our GNG landmark method is tolerant to noise and eliminates outliers. Our method accelerates the classical surface reconstruction and filtering processes. The proposed method offers higher accuracy than methods of similar efficiency, such as the voxel grid.
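For contrast with the voxel-grid baseline mentioned in this and the preceding abstract, a minimal voxel-grid filter is sketched below: points are quantised to a regular 3D grid and each occupied cell is replaced by the centroid of its points (leaf size and data are illustrative assumptions).

```python
# Minimal voxel-grid downsampling filter: quantise points to a 3D grid
# and replace each occupied cell by the centroid of its points.
import numpy as np

def voxel_grid_filter(points, leaf=2.0):
    """points: (N, 3) array; leaf: voxel edge length in the same units."""
    keys = np.floor(points / leaf).astype(np.int64)
    _, inverse = np.unique(keys, axis=0, return_inverse=True)
    inverse = inverse.reshape(-1)       # 1-D across NumPy versions
    counts = np.bincount(inverse).astype(float)
    out = np.zeros((inverse.max() + 1, 3))
    for dim in range(3):                # per-voxel centroid
        out[:, dim] = np.bincount(inverse, weights=points[:, dim]) / counts
    return out

cloud = np.random.default_rng(1).normal(size=(10000, 3)) * 10
print(voxel_grid_filter(cloud, leaf=2.5).shape)
```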
Abstract:
Background: Many clinical trials of DC-based immunotherapy involve administration of monocyte-derived DCs (Mo-DC) on multiple occasions. We aimed to determine the optimal cell processing procedures and timing (leukapheresis, RBC depletion and cryopreservation) for generation of Mo-DC for clinical purposes. Methods: Leukapheresis was undertaken using a COBE Spectra. Two instrument settings were compared: the standard semi-automated software (Version 4.7) (n = 10) and the fully automated software (Version 6.0) (n = 40). Density gradient centrifugation using Ficoll, Percoll, a combination of these methods, or neither was compared for RBC depletion. Outcomes (including cell yield and purity) were compared for cryopreserved unmanipulated monocytes and cryopreserved Mo-DC. Results: Software Version 6.0 provided significantly better enrichment for monocytes (P
Abstract:
Thesis (Ph.D.)--University of Washington, 2016-06
Abstract:
Left ventricular (LV) volumes have important prognostic implications in patients with chronic ischemic heart disease. We sought to examine the accuracy and reproducibility of real-time 3D echo (RT-3DE) compared with Tl-201 single photon emission computed tomography (SPECT) and cardiac magnetic resonance imaging (MRI). Thirty patients (n = 30; age 62±9 years, 23 men) with chronic ischemic heart disease underwent LV volume assessment with RT-3DE, SPECT, and MRI. A novel semi-automated border detection algorithm was used for RT-3DE. End-diastolic volumes (EDV) and end-systolic volumes (ESV) measured by RT-3DE and SPECT were compared with MRI as the standard of reference. RT-3DE and SPECT volumes showed excellent correlation with MRI (Table). Both RT-3DE and SPECT underestimated LV volumes compared with MRI (ESV: SPECT 74±58 ml versus RT-3DE 95±48 ml versus MRI 96±54 ml; EDV: SPECT 121±61 ml versus RT-3DE 169±61 ml versus MRI 179±56 ml). The degree of ESV underestimation with RT-3DE was not significant.
Abstract:
Deformable models are a highly accurate and flexible approach to segmenting structures in medical images. Their primary drawback is that they are sensitive to initialisation, with accurate and robust results often requiring initialisation close to the true object in the image. Automatically obtaining a good initialisation is problematic for many structures in the body. The cartilages of the knee are a thin elastic material that covers the ends of the bones, absorbing shock and allowing smooth movement. The degeneration of these cartilages characterises the progression of osteoarthritis. The state of the art in cartilage segmentation is 2D semi-automated algorithms, which require significant time and supervision by a clinical expert, so the development of an automatic segmentation algorithm for the cartilages is an important clinical goal. In this paper we present an approach towards this goal that allows us to automatically provide a good initialisation for deformable models of the patella cartilage by utilising the strong spatial relationship of the cartilage to the underlying bone.
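A hedged sketch of the initialisation idea follows: given a segmented patella (bone) mask, the thin cartilage layer can be seeded from the narrow band of voxels just outside the bone surface. The band width and the toy volume are assumptions for illustration, not the paper's actual pipeline.

```python
# Hedged sketch: seed a cartilage initialisation from the narrow band
# of voxels just outside a segmented bone surface. Band width and the
# toy volume are illustrative assumptions.
import numpy as np
from scipy import ndimage

def cartilage_seed(bone_mask, band_voxels=3):
    """Return a binary band hugging the exterior of the bone surface,
    usable as an initial region for a deformable cartilage model."""
    dilated = ndimage.binary_dilation(bone_mask, iterations=band_voxels)
    return dilated & ~bone_mask

# Toy volume: a solid sphere stands in for the segmented patella.
z, y, x = np.ogrid[:64, :64, :64]
bone = (z - 32) ** 2 + (y - 32) ** 2 + (x - 32) ** 2 < 15 ** 2
seed = cartilage_seed(bone)
print(f"seed voxels: {int(seed.sum())}")
```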
Abstract:
The recording of visual acuity using the Snellen letter chart is only a limited measure of the visual performance of an eye wearing a refractive aid. Qualitative in addition to quantitative information is required to establish such a parameter: spatial, temporal and photometric aspects must all be incorporated into the test procedure. The literature relating to the correction of ametropia by refractive aids was reviewed. Selected aspects of a comparison between the correction provided by spectacles and that provided by contact lenses were considered. Special attention was directed to soft hydrophilic contact lenses. Despite technological advances which have produced physiologically acceptable soft lenses, unpredictable visual factors still remain associated with this recent form of refractive aid. Several techniques for vision assessment were described, and previous studies of visual performance were discussed. To facilitate the investigation of visual performance in a clinical environment, a new semi-automated system was described: this utilized the presentation of broken-ring test stimuli on a television screen. The research project comprised two stages. Initial work was concerned with the validation of the television system, including the optimization of its several operational variables. The second phase involved the utilization of the system in an investigation of visual performance aspects of the first month of regular daily soft contact lens wear by experimentally naive subjects. On the basis of the results of this work, a ‘homoeostatic’ model has been proposed to represent the strategy which an observer adopts in order to optimize his visual performance with soft contact lenses.
Abstract:
The primary aim of this thesis was to investigate the in vivo ocular morphological and contractile changes occurring within the accommodative apparatus prior to the onset of presbyopia, with particular reference to ciliary muscle changes with age and the origin of a myopic shift in refraction during incipient presbyopia. Commissioned semi-automated software proved capable of extracting accurate and repeatable measurements from crystalline lens and ciliary muscle Anterior Segment Optical Coherence Tomography (AS-OCT) images and reduced the subjectivity of AS-OCT image analysis. AS-OCT was utilised to document longitudinal changes in ciliary muscle morphology within an incipient presbyopic population (n=51). A significant antero-inwards shift of ciliary muscle mass was observed after 2.5 years. Furthermore, in a subgroup study (n=20), an accommodative antero-inwards movement of ciliary muscle mass was evident. After 2.5 years, the centripetal response of the ciliary muscle significantly attenuated during accommodation, whereas the antero-posterior mobility of the ciliary muscle remained invariant. Additionally, longitudinal measurement of ocular biometry revealed a significant increase in crystalline lens thickness and a corresponding decrease in anterior chamber depth after 2.5 years (n=51). Lenticular changes appear to be a determinant of changes in refraction during incipient presbyopia. During accommodation, a significant increase in crystalline lens thickness and axial length was observed, whereas anterior chamber depth decreased (n=20). The change in ocular biometry per dioptre of accommodation exerted remained invariant after 2.5 years. Cross-sectional ocular biometric data were collected to quantify accommodative axial length changes from early adulthood to advanced presbyopia (n=72). Accommodative axial length elongation significantly attenuated during presbyopia, which was consistent with a significant increase in ocular rigidity during presbyopia. The studies presented in this thesis support the Helmholtz theory of accommodation and, despite the reduction in centripetal ciliary muscle contractile response with age, primarily implicate lenticular changes in the development of presbyopia.
Abstract:
Purpose: To assess the inter- and intra-observer variability of subjective grading of the retinal arterio-venous ratio (AVR) using visual grading, and to compare the subjectively derived grades to an objective method using a semi-automated computer program. Methods: Following intraocular pressure and blood pressure measurements, all subjects underwent dilated fundus photography. 86 monochromatic retinal images centred on the optic nerve head (52 healthy volunteers) were obtained using a Zeiss FF450+ fundus camera. Arterio-venous ratios (AVR), central retinal artery equivalent (CRAE) and central retinal vein equivalent (CRVE) were calculated on three separate occasions by a single observer semi-automatically using the software VesselMap (Imedos Systems, Jena, Germany). Following the automated grading, three examiners graded the AVR visually on three separate occasions in order to assess their agreement. Results: Reproducibility of the semi-automatic parameters was excellent (ICCs: 0.97 (CRAE), 0.985 (CRVE) and 0.952 (AVR)). However, visual grading of AVR showed inter-grader differences as well as discrepancies between subjectively derived and objectively calculated AVR (all p < 0.000001). Conclusion: Grader education and experience lead to inter-grader differences but, more importantly, subjective grading is not capable of picking up subtle differences across healthy individuals and does not represent the true AVR when compared with an objective assessment method. Technological advancements mean we no longer need to rely on ophthalmoscopic evaluation but can capture and store fundus images with retinal cameras, enabling us to measure vessel calibre more accurately than by visual estimation; hence objective measurement should be integrated into optometric practice for improved accuracy and reliability of clinical assessments of retinal vessel calibres. © 2014 Spanish General Council of Optometry.
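For reference, the summary parameters relate as below; the pairing formulas shown are the widely used revised Parr-Hubbard (Knudtson) coefficients, which software of this kind typically implements, though the abstract does not state the exact formulas used by VesselMap.

```latex
% Revised Parr-Hubbard (Knudtson) pairing of branch widths w_1, w_2
% into a trunk equivalent, iterated over the six largest vessels:
\begin{align*}
  \hat{W}_a &= 0.88\sqrt{w_1^2 + w_2^2} && \text{(arterioles, yielding CRAE)}\\
  \hat{W}_v &= 0.95\sqrt{w_1^2 + w_2^2} && \text{(venules, yielding CRVE)}\\
  \mathrm{AVR} &= \mathrm{CRAE}/\mathrm{CRVE}
\end{align*}
```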
Abstract:
Background: Summarised retinal vessel diameters are linked to systemic vascular pathology. Monochromatic images provide the best contrast for measuring vessel calibres. However, when obtaining images with a dual-wavelength oximeter, the red-free image can be extracted as the green-channel information only, which in turn will reduce the number of photographs taken at a given time. This will reduce patient exposure to the camera flash and could provide images of sufficient quality to reliably measure vessel calibres. Methods: We obtained retinal images of one eye of 45 healthy participants. Central retinal arteriolar and central retinal venular equivalents (CRAE and CRVE, respectively) were measured using semi-automated software from two monochromatic images: one taken with a red-free filter and one extracted from the green channel of a dual-wavelength oximetry image. Results: Participants were aged between 21 and 62 years; all were normotensive (SBP: 115 (12) mmHg; DBP: 72 (10) mmHg) and had normal intra-ocular pressures (12 (3) mmHg). Bland-Altman analysis revealed good agreement of CRAE and CRVE as obtained from both images (mean bias CRAE = 0.88; CRVE = 2.82). Conclusions: Summarised retinal vessel calibre measurements obtained from oximetry images are in good agreement with those obtained using red-free photographs.
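A minimal sketch of the extraction step described above: the green channel of an RGB fundus photograph is saved as a red-free-equivalent monochromatic image (file names are placeholders).

```python
# Minimal sketch of green-channel extraction from an RGB fundus image.
# File names are placeholders, not from the study.
from PIL import Image

rgb = Image.open("fundus_rgb.png").convert("RGB")
_, green, _ = rgb.split()            # channel order: R, G, B
green.save("fundus_green_channel.png")
```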
Abstract:
ACM Computing Classification System (1998): J.3.
Abstract:
Today, databases have become an integral part of information systems. In the past two decades, we have seen different database systems being developed independently and used in different application domains. Today's interconnected networks and advanced applications, such as data warehousing, data mining & knowledge discovery and intelligent data access to information on the Web, have created a need for integrated access to such heterogeneous, autonomous, distributed database systems. Heterogeneous/multidatabase research has focused on this issue, resulting in many different approaches. However, no single, generally accepted methodology has emerged in academia or industry that provides ubiquitous intelligent data access from heterogeneous, autonomous, distributed information sources. This thesis describes a heterogeneous database system being developed at the High-performance Database Research Center (HPDRC). A major impediment to ubiquitous deployment of multidatabase technology is the difficulty in resolving semantic heterogeneity, that is, identifying related information sources for integration and querying purposes. Our approach considers the semantics of the meta-data constructs in resolving this issue. The major contributions of the thesis work include: (i) a scalable, easy-to-implement architecture for developing a heterogeneous multidatabase system, utilizing the Semantic Binary Object-oriented Data Model (Sem-ODM) and the Semantic SQL query language to capture the semantics of the data sources being integrated and to provide an easy-to-use query facility; (ii) a methodology for semantic heterogeneity resolution that investigates the extents of the meta-data constructs of component schemas; this methodology is shown to be correct, complete and unambiguous; (iii) a semi-automated technique for identifying semantic relations, which is the basis of semantic knowledge for integration and querying, using shared ontologies for context mediation; (iv) resolutions for schematic conflicts and a language for defining global views from a set of component Sem-ODM schemas; (v) the design of a knowledge base for storing and manipulating meta-data and knowledge acquired during the integration process, which acts as the interface between the integration and query processing modules; (vi) techniques for Semantic SQL query processing and optimization based on semantic knowledge in a heterogeneous database environment; and (vii) a framework for intelligent computing and communication on the Internet applying the concepts of our work.
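As a hedged illustration of contribution (iii), the sketch below maps attribute names from two component schemas to concepts of a shared ontology and flags pairs that resolve to the same concept as candidate semantic relations; all schema and concept names are invented, and this is not the Sem-ODM implementation.

```python
# Hypothetical sketch of ontology-mediated identification of semantic
# relations: attributes of two component schemas resolving to the same
# shared-ontology concept become integration candidates. All names here
# are invented for illustration.
SHARED_ONTOLOGY = {
    "emp_name": "Person.name", "worker": "Person.name",
    "sal": "Person.salary", "wage": "Person.salary",
    "dept": "Organization.unit",
}

def candidate_relations(schema_a, schema_b):
    """Yield (attr_a, attr_b, concept) triples sharing an ontology concept."""
    for a in schema_a:
        for b in schema_b:
            concept = SHARED_ONTOLOGY.get(a)
            if concept is not None and concept == SHARED_ONTOLOGY.get(b):
                yield a, b, concept

payroll = ["emp_name", "sal", "dept"]
hr_db = ["worker", "wage", "office"]
for a, b, concept in candidate_relations(payroll, hr_db):
    print(f"{a} <-> {b} via {concept}")
```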