973 results for Semi-automatic road extraction


Relevance: 100.00%

Abstract:

The upsurge of Semantic Web technologies offers an opportunity to increase the flexibility, extensibility and consistency of existing industrial standards for modelling web services. The paper examines the types of semantic description of web services and the degree to which BPEL4WS (Business Process Execution Language for Web Services) realizes them at the abstract and executable levels, respectively. Methods for using BPEL4WS in support of the semi-automatic integration of semantic web services are suggested.

Abstract:

Purpose: To assess the inter- and intra-observer variability of subjective grading of the retinal arterio-venous ratio (AVR) using visual grading, and to compare the subjectively derived grades with an objective method using a semi-automated computer program. Methods: Following intraocular pressure and blood pressure measurements, all subjects underwent dilated fundus photography. 86 monochromatic retinal images centred on the optic nerve head (52 healthy volunteers) were obtained using a Zeiss FF450+ fundus camera. Arterio-venous ratio (AVR), central retinal artery equivalent (CRAE) and central retinal vein equivalent (CRVE) were calculated semi-automatically on three separate occasions by a single observer using the software VesselMap (Imedos Systems, Jena, Germany). Following the automated grading, three examiners graded the AVR visually on three separate occasions in order to assess their agreement. Results: Reproducibility of the semi-automatic parameters was excellent (ICCs: 0.97 (CRAE), 0.985 (CRVE) and 0.952 (AVR)). However, visual grading of AVR showed inter-grader differences as well as discrepancies between subjectively derived and objectively calculated AVR (all p < 0.000001). Conclusion: Grader education and experience lead to inter-grader differences; more importantly, subjective grading is not capable of picking up subtle differences across healthy individuals and does not represent the true AVR when compared with an objective assessment method. Technological advances mean we no longer need to rely on ophthalmoscopic evaluation but can capture and store fundus images with retinal cameras, enabling vessel calibre to be measured more accurately than by visual estimation; objective measurement should therefore be integrated into optometric practice for improved accuracy and reliability of clinical assessments of retinal vessel calibre. © 2014 Spanish General Council of Optometry.

Abstract:

In the paper we consider the technology of developing ontologies for new domains. We discuss the main principles of ontology development, automatic methods for extracting terms from domain texts, and the types of relations in an ontology.
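The paper does not fix a particular extraction algorithm; a common baseline for automatic term extraction is TF-IDF-style contrastive scoring of domain texts against a background corpus. A minimal sketch under that assumption (the function and corpus names are illustrative, not from the paper):

```python
import math
import re
from collections import Counter

def extract_terms(domain_docs, background_docs, top_n=5):
    # Score each word by: frequency in the domain corpus times inverse
    # document frequency in a generic background corpus, so domain-specific
    # terms rank high and common words rank low.
    tokenize = lambda text: re.findall(r"[a-z]+", text.lower())
    domain_tf = Counter(w for doc in domain_docs for w in tokenize(doc))
    bg_df = Counter(w for doc in background_docs for w in set(tokenize(doc)))
    n_bg = len(background_docs)
    score = lambda w: domain_tf[w] * math.log((1 + n_bg) / (1 + bg_df[w]))
    return sorted(domain_tf, key=score, reverse=True)[:top_n]
```

Real term-extraction pipelines also handle multi-word terms and part-of-speech filtering; this sketch only shows the contrastive-scoring idea.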

Abstract:

In this paper we propose an approach for the cost-effective employment of semantic technologies to improve the efficiency of searching and browsing digital artwork collections. It is based on the semi-automatic creation of a Topic Map-based virtual art gallery portal using existing Topic Maps tools. Such a ‘cheap’ solution could enable small art museums or art-related educational programs that lack sufficient funding for software development and publication infrastructure to take advantage of the emerging semantic technologies. The proposed approach has been used to create the WSSU Diggs Gallery Portal.

Abstract:

Software development is an extremely complex process, during which human errors are introduced and result in faulty software systems. It is highly desirable and important that these errors be prevented and detected as early as possible. A software architecture design is a high-level system description, which embodies many system features and properties that are eventually implemented in the final operational system. Therefore, methods for modeling and analyzing software architecture descriptions can help prevent and reveal human errors and thus improve software quality. Furthermore, if an analyzed software architecture description can be used to derive a partial software implementation, especially when the derivation can be automated, significant benefits can be gained in both system quality and productivity. This dissertation proposes a framework for an integrated analysis of both the design and the implementation. To ensure the desirable properties of the architecture model, we apply formal verification using the model checking technique. To ensure the desirable properties of the implementation, we develop a methodology and an associated tool to translate an architecture specification into an implementation written in a combination of the ArchJava/Java/AspectJ programming languages. The translation is semi-automatic, so that many manual programming errors can be prevented. Furthermore, the translation inserts monitoring code into the implementation so that runtime verification can be performed, which provides additional assurance of the quality of the implementation. Moreover, validations of the translations from architecture model to program are provided. Finally, several case studies are conducted and presented.
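The dissertation's translation targets ArchJava/Java/AspectJ. As a language-neutral illustration of the idea of inserting monitoring code for runtime verification, here is a Python sketch in which a decorator plays the role of AspectJ-style advice checking an architectural invariant around each call; the names and the invariant are invented for the example:

```python
import functools

def monitor(invariant, message="architectural invariant violated"):
    # Weave a runtime check around a component operation -- a rough Python
    # analogue of the woven monitoring code described in the abstract.
    def decorator(fn):
        @functools.wraps(fn)
        def wrapper(*args, **kwargs):
            result = fn(*args, **kwargs)
            if not invariant(result):
                raise AssertionError(f"{fn.__name__}: {message}")
            return result
        return wrapper
    return decorator

@monitor(lambda r: r >= 0, "queue length must stay non-negative")
def dequeue_count(pending, served):
    # Hypothetical component operation; the invariant is checked on return.
    return pending - served
```

Because the check is woven in rather than hand-written at every call site, the property travels with the operation, which is the benefit the dissertation attributes to automated insertion.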

Abstract:

Modern IT infrastructures are constructed from large-scale computing systems and administered by IT service providers. Manually maintaining such large computing systems is costly and inefficient. Service providers often seek automatic or semi-automatic methodologies for detecting and resolving system issues to improve their service quality and efficiency. This dissertation investigates several data-driven approaches for assisting service providers in achieving this goal. The problems studied by these approaches fall into three aspects of the service workflow: 1) preprocessing raw textual system logs into structural events; 2) refining monitoring configurations to eliminate false positives and false negatives; 3) improving the efficiency of system diagnosis on detected alerts. Solving these problems usually requires a huge amount of domain knowledge about the particular computing systems. The approaches investigated by this dissertation are developed based on event mining algorithms, which are able to automatically derive part of that knowledge from the historical system logs, events and tickets. In particular, two textual clustering algorithms are developed for converting raw textual logs into system events. For refining the monitoring configuration, a rule-based alert prediction algorithm is proposed for eliminating false alerts (false positives) without losing any real alert, and a textual classification method is applied to identify the missing alerts (false negatives) from manual incident tickets. For system diagnosis, this dissertation presents an efficient algorithm for discovering the temporal dependencies between system events with their corresponding time lags, which can help administrators determine the redundancies of deployed monitoring situations and the dependencies of system components.
To improve the efficiency of incident ticket resolution, several KNN-based algorithms that recommend relevant historical tickets with resolutions for incoming tickets are investigated. Finally, this dissertation offers a novel algorithm for searching similar textual event segments over large system logs, which assists administrators in locating similar system behaviors in the logs. Extensive empirical evaluation on system logs, events and tickets from real IT infrastructures demonstrates the effectiveness and efficiency of the proposed approaches.
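As an illustration of the first step, converting raw textual logs into structural events, here is a deliberately simple Python sketch that clusters log lines by masking their variable fields into a shared template; the dissertation's actual clustering algorithms are more sophisticated than this:

```python
import re
from collections import defaultdict

def cluster_log_lines(lines):
    # Group raw log lines into candidate event templates by replacing the
    # variable fields (decimal numbers, hex identifiers) with a wildcard.
    # Lines sharing a template are treated as instances of one event type.
    clusters = defaultdict(list)
    for line in lines:
        template = re.sub(r"0x[0-9a-f]+|\d+", "<*>", line.lower())
        clusters[template].append(line)
    return dict(clusters)
```

On real logs, pure masking over-merges some events and under-merges others, which is exactly why learned clustering (as studied in the dissertation) outperforms fixed rules.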

Abstract:

From data collected by RV Polarstern, and additional echosoundings provided by national hydrographic offices, research institutions and the International Hydrographic Organization (IHO) Digital Bathymetric Data Center, the 1:1,000,000 Bathymetric Chart of the Weddell Sea (AWI BCWS) series has been developed. The heterogeneity of the bathymetric data and the lack of observations within ice-covered areas required the incorporation of supplementary geophysical and geographical information. A semi-automatic procedure was developed for terrain modeling and contouring. In coastal regions, adjacent sub-glacial information was included in order to model the bathymetry of the transition zone along the Antarctic ice edge. Six sheets of the AWI BCWS series at the scale of 1:1,000,000, covering the southern Weddell Sea from 66°S to 78°S and from 68°W to 0°E, were recently completed and included in the 1997 General Bathymetric Chart of the Oceans (GEBCO) Digital Atlas CD-ROM (http://www.gebco.net). On the basis of these six 1:1,000,000 AWI BCWS sheets, a generalized 1:3,000,000-scale bathymetric chart was compiled for the entire southern Weddell Sea.

Abstract:

This thesis describes the development of an open-source system for virtual bronchoscopy used in combination with electromagnetic instrument tracking. The end application is virtual navigation of the lung for biopsy of early-stage cancer nodules. The open-source platform 3D Slicer was used for creating freely available algorithms for virtual bronchoscopy. Firstly, the development of an open-source semi-automatic algorithm for prediction of solitary pulmonary nodule malignancy is presented. This approach may help the physician decide whether to proceed with biopsy of the nodule. The user-selected nodule is segmented in order to extract radiological characteristics (i.e., size, location, edge smoothness, calcification presence, cavity wall thickness), which are combined with patient information to calculate the likelihood of malignancy. The overall accuracy of the algorithm is shown to be high compared to independent experts' assessment of malignancy. The algorithm is also compared with two different predictors, and our approach is shown to provide the best overall prediction accuracy. The development of an airway segmentation algorithm, which extracts the airway tree from surrounding structures on chest Computed Tomography (CT) images, is then described. This represents the first fundamental step toward the creation of a virtual bronchoscopy system. Clinical and ex-vivo images are used to evaluate the performance of the algorithm. Different CT scan parameters are investigated and the parameters for successful airway segmentation are optimized. Slice thickness is the most influential parameter, while variation of reconstruction kernel and radiation dose is shown to be less critical. Airway segmentation is used to create a 3D rendered model of the airway tree for virtual navigation. Finally, the first open-source virtual bronchoscopy system was combined with electromagnetic tracking of the bronchoscope for the development of a GPS-like system for navigating within the lungs.
Tools for pre-procedural planning and for assisting navigation are provided. Registration between the lungs of the patient and the virtually reconstructed airway tree is achieved using a landmark-based approach. In an attempt to reduce difficulties with registration errors, we also implemented a landmark-free registration method based on a balanced airway survey. In-vitro and in-vivo testing showed good accuracy for this registration approach. The centreline of the 3D airway model is extracted and used to compensate for possible registration errors. Tools are provided to select a target for biopsy on the patient CT image, and pathways from the trachea towards the selected targets are automatically created. The pathways guide the physician during navigation, while distance-to-target information is updated in real time and presented to the user. During navigation, video from the bronchoscope is streamed and presented to the physician next to the 3D rendered image. The electromagnetic tracking is implemented with 5-DOF sensing that does not provide roll rotation information. An intensity-based image registration approach is implemented to rotate the virtual image according to the bronchoscope's rotations. The virtual bronchoscopy system is shown to be easy to use and accurate in replicating the clinical setting, as demonstrated in the pre-clinical environment of a breathing lung model. Animal studies were performed to evaluate the overall system performance.
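The landmark-based registration step described above has a standard closed-form solution, the Kabsch/Procrustes algorithm for paired points. A NumPy sketch of that general technique, assuming paired landmark lists; the thesis' actual implementation inside 3D Slicer is not reproduced here:

```python
import numpy as np

def register_landmarks(patient_pts, model_pts):
    # Closed-form rigid (Kabsch) alignment of paired landmarks: returns the
    # rotation R and translation t mapping patient space into CT model space,
    # plus the fiducial registration error (root-mean-square residual).
    P = np.asarray(patient_pts, float)
    M = np.asarray(model_pts, float)
    cp, cm = P.mean(axis=0), M.mean(axis=0)
    H = (P - cp).T @ (M - cm)                 # cross-covariance of landmarks
    U, _, Vt = np.linalg.svd(H)
    D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])  # no mirrors
    R = Vt.T @ D @ U.T
    t = cm - R @ cp
    fre = np.sqrt(np.mean(np.sum((P @ R.T + t - M) ** 2, axis=1)))
    return R, t, fre
```

The fiducial registration error is exactly the quantity one monitors when deciding whether a landmark set is adequate before navigation begins.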

Abstract:

To remain competitive, forest companies seek to control their procurement costs. Harvesters are equipped with on-board computers that allow certain functions to be controlled and automated. Yet these technologies are not commonly used and are, at best, under-used. While the industry shows growing interest in the use of these computers, little research has addressed the gains in productivity and in compliance with processing specifications resulting from their use. The objective of the study was to measure the impact of three degrees of automation (manual, semi-automatic and automatic) on productivity (m3 per productive machine hour) and on the compliance rate of log lengths and topping diameters (%). Data collection took place in the harvest areas of Produits forestiers Résolu north of Lac St-Jean between January and August 2015. A complete block design was set up for each of the five operators who took part in the study. A 5% threshold was used for the analysis of variance, after contrasts were computed. Only one case showed a significant difference in productivity attributable to the change in the degree of automation, while no significant difference was detected for the compliance of topping diameters; trends were nevertheless observed. The length compliance rates obtained by two operators showed significant differences. Since these two operators worked on two distinct machines, this suggests the impact the operator may also have on the length compliance rate.

Abstract:

A large class of computational problems is characterised by frequent synchronisation and computational requirements which change as a function of time. When such a problem is solved on a message-passing multiprocessor machine [5], the combination of these characteristics leads to system performance which deteriorates over time. As the communication performance of parallel hardware steadily improves, load balance becomes a dominant factor in obtaining high parallel efficiency. Performance can be improved by periodic redistribution of the computational load; however, redistribution can sometimes be very costly. We study the issue of deciding when to invoke a global load re-balancing mechanism. Such a decision policy must actively weigh the costs of remapping against the performance benefits, and should be general enough to apply automatically to a wide range of computations. This paper discusses a generic strategy for Dynamic Load Balancing (DLB) in unstructured mesh computational mechanics applications. The strategy is intended to handle varying levels of load change throughout the run. The major issues involved in a generic dynamic load balancing scheme are investigated, together with techniques to automate the implementation of a dynamic load balancing mechanism within the Computer Aided Parallelisation Tools (CAPTools) environment, a semi-automatic tool for the parallelisation of mesh-based FORTRAN codes.
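The remap decision described above, weighing remapping cost against benefit, can be caricatured in a few lines: remap only when the projected saving from a balanced partition, accumulated over the remaining steps, outweighs the one-off redistribution cost. A sketch with an invented cost model; CAPTools' actual policy is more elaborate:

```python
def should_rebalance(step_times, remap_cost, remaining_steps):
    # step_times: most recent per-processor times for one synchronised step.
    # The step takes as long as the slowest processor; a perfect remap would
    # bring every processor down to the mean. Remap only if the projected
    # saving over the remaining steps exceeds the one-off remapping cost.
    slowest = max(step_times)
    balanced = sum(step_times) / len(step_times)
    projected_saving = (slowest - balanced) * remaining_steps
    return projected_saving > remap_cost
```

Because the load changes during the run, a real policy would re-estimate `remaining_steps` and the imbalance trend rather than assume the current snapshot persists.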

Abstract:

Spatial-temporal dynamics of zooplankton in the Caravelas river estuary (Bahia, Brazil). The survey was conducted to describe the zooplankton community of the Caravelas estuary (Bahia, Brazil) and to quantify and relate the patterns of horizontal and vertical transport to the type of tide (neap and spring) and tidal phase (flood and ebb). Zooplankton samples were collected with a suction pump (300 L), filtered through plankton nets (300 μm) and fixed in 4% saline formalin. Samples were collected at a fixed point (A1) near the mouth of the estuary, at neap and spring tides during the dry and rainy seasons, over 13 hours at 1-hour intervals at three depths: surface, middle and bottom. Simultaneously with the biological collection, current velocity, temperature and salinity were measured with a CTD. In the laboratory, samples were selected for analysis under a stereomicroscope; 25 groups were identified, with Copepoda containing the highest number of species. The 168 temporal samples were subsampled and processed on ZooScan equipment with the ZooProcess software, generating 458,997 vignettes. Eight taxa were identified automatically and 16 were classified semi-automatically. The Copepoda group, despite the limited taxonomic refinement of ZooScan, had 2 genera and 1 species identified automatically. Across the dry and wet seasons, the groups Brachyura (zoea), Chaetognatha, the calanoid copepods (others), Temora spp., Oithona spp. and Euterpina acutifrons had the highest frequency of occurrence, appearing in more than 70% of the samples. Copepoda showed the largest relative abundance in both seasons.
There was no seasonal variation in total zooplankton, with mean densities of 7826 ± 4219 org m-3 in the dry season and 7959 ± 3675 org m-3 in the rainy season, nor variation between the types and phases of the tides, but significant seasonal differences were recorded for the main zooplankton groups. Vertical stratification was observed for the major zooplankton groups (Brachyura, Chaetognatha, Calanoida (others), Oithona spp., Temora spp. and Euterpina acutifrons). The extent of this stratification varied with the tide type (neap or spring) and tidal phase (flood or ebb). Instantaneous transport was influenced mostly by current velocity, with higher values observed at spring tides for total zooplankton, although this pattern varied by zooplankton group. According to the import and export data for total zooplankton, the outflow of organisms from the estuary was higher than the input. The results suggest that the Caravelas estuary may influence the dynamics of organic matter on the adjacent coast, with possible consequences for the Abrolhos National Marine Park.

Abstract:

Humans have a great ability to extract information from visual data acquired by sight. Through a learning process that starts at birth and continues throughout life, image interpretation becomes almost instinctive. At a glance, one can easily describe a scene with reasonable precision, naming its main components. Usually this is done by extracting low-level features such as edges, shapes and textures, and associating them with high-level meanings; in this way, a semantic description of the scene is produced. An example is the human capacity to recognize and describe other people's physical and behavioural characteristics, or biometrics. Soft biometrics also represent inherent characteristics of the human body and behaviour, but do not allow unique identification of a person. The computer vision field aims to develop methods capable of performing visual interpretation with performance similar to humans. This thesis proposes computer vision methods that allow high-level information to be extracted from images in the form of soft biometrics. The problem is approached in two ways: with unsupervised and with supervised learning methods. The first seeks to group images by automatically learning feature extraction, using convolution techniques, evolutionary computing and clustering; the images employed in this approach contain faces and people. The second approach employs convolutional neural networks, which can operate on raw images and learn both the feature-extraction and the classification processes; here, images are classified according to gender and clothing, divided into the upper and lower parts of the human body. The first approach, tested with different image datasets, obtained an accuracy of approximately 80% for faces versus non-faces and 70% for people versus non-people. The second, tested with images and videos, obtained an accuracy of about 70% for gender, 80% for upper-body clothing and 90% for lower-body clothing.
The results of these case studies show that the proposed methods are promising, allowing automatic high-level annotation of images. This opens possibilities for applications in diverse areas, such as content-based image and video retrieval and automatic video surveillance, reducing the human effort in the tasks of manual annotation and monitoring.
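As an illustration of the clustering stage of the unsupervised pipeline, here is a minimal k-means in plain Python operating on already-extracted feature vectors; the thesis combines such a step with convolutional and evolutionary feature learning, which is not reproduced here, and the data below are invented:

```python
import random

def kmeans(points, k, iters=50, seed=0):
    # Plain k-means on feature vectors (tuples of floats): assign each point
    # to its nearest centroid, then move each centroid to its group's mean.
    rng = random.Random(seed)
    centroids = rng.sample(points, k)
    groups = [[] for _ in range(k)]
    for _ in range(iters):
        groups = [[] for _ in range(k)]
        for p in points:
            dist = lambda c: sum((a - b) ** 2 for a, b in zip(p, centroids[c]))
            groups[min(range(k), key=dist)].append(p)
        centroids = [tuple(sum(xs) / len(xs) for xs in zip(*g)) if g
                     else centroids[i] for i, g in enumerate(groups)]
    return centroids, groups
```

In the thesis setting, `points` would be feature vectors learned from face or person images, and the resulting groups would correspond to candidate soft-biometric categories.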

Abstract:

Part 19: Knowledge Management in Networks


Abstract:

The left ventricular ejection fraction is an excellent marker of cardiac function. Several techniques, invasive or not, are used to calculate it: angiography, echocardiography, cardiac magnetic resonance imaging, cardiac CT, radionuclide ventriculography, and myocardial perfusion imaging in nuclear medicine. More than 40 years of scientific publications praise radionuclide ventriculography for its speed of execution, its availability, its low cost, and its intra-observer and inter-observer reproducibility. The left ventricular ejection fraction was calculated twice in 47 patients, by two technologists, on two distinct acquisitions, using three methods: manual, automatic and semi-automatic. The automatic and semi-automatic methods showed, overall, better reproducibility, a smaller standard error of measurement and a smaller minimal detectable difference. The manual method yielded a result systematically and significantly lower than the two other methods; it was the only technique that showed a significant difference in the intra-observer analysis. Its standard error of measurement is 40 to 50% larger than with the other techniques, as is its minimal detectable difference. Although all three are excellent, reproducible techniques for evaluating the left ventricular ejection fraction, the reliability estimates of the automatic and semi-automatic methods are superior to those of the manual method.
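The two reliability statistics compared across the three methods, the standard error of measurement and the minimal detectable difference, follow standard formulas: SEM = SD·sqrt(1 − ICC) and MDC95 = 1.96·sqrt(2)·SEM. A minimal sketch (the example values below are illustrative, not the study's data):

```python
import math

def sem_and_mdc(sd, icc):
    # Standard error of measurement: part of the between-subject spread (SD)
    # not explained by reliable variance (ICC).  Minimal detectable change at
    # 95% confidence accounts for two measurements and a two-sided interval.
    sem = sd * math.sqrt(1.0 - icc)
    mdc = 1.96 * math.sqrt(2.0) * sem
    return sem, mdc
```

For example, with a between-subject SD of 8 ejection-fraction points, an ICC of 0.96 gives an SEM of 1.6 points, so a change smaller than roughly 4.4 points cannot be distinguished from measurement noise; a method with a 50% larger SEM proportionally inflates that threshold.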