913 results for Video interaction analysis: methods and methodology


Relevance:

100.00%

Publisher:

Abstract:

Some fundamental biological processes, such as embryonic development, have been preserved during evolution and are common to species at different phylogenetic positions, yet remain largely unexplored. An understanding of the cell morphodynamics that lead to organized spatial distributions of cells, such as tissues and organs, can be achieved by reconstructing cell shapes and positions throughout the development of a live animal embryo. In this work we design a chain of image processing methods to automatically segment and track cell nuclei and membranes during the development of the zebrafish embryo, which has been widely validated as a model organism for understanding vertebrate development, gene function, and healing/repair mechanisms in vertebrates. The embryo is first labeled through the ubiquitous expression of fluorescent proteins targeted to cell nuclei and membranes, and temporal sequences of volumetric images are acquired with laser scanning microscopy. Cell positions are detected by processing the nuclei images either with a generalized form of the Hough transform or by identifying nuclei as local maxima after a smoothing preprocessing step. Membrane and nucleus shapes are reconstructed using PDE-based variational techniques such as the Subjective Surfaces and Chan-Vese methods. Cell tracking is performed by combining the previously detected information on cell shape and position with biological regularization constraints. Our results are manually validated and reconstruct the formation of the zebrafish brain at the 7-8 somite stage, with all cells tracked from the late sphere stage, for at least 6 hours with less than 2% error. Our reconstruction opens the way to a systematic investigation of cellular behaviors, of the clonal origin and clonal complexity of brain organs, and of the contribution of cell proliferation modes and cell movements to the formation of local patterns and morphogenetic fields.
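The nucleus-detection step described above (smoothing followed by local-maximum extraction) can be sketched as follows; the synthetic image, filter sizes, and threshold are illustrative assumptions, not the pipeline's actual parameters.

```python
import numpy as np
from scipy import ndimage

def detect_nuclei(image, sigma=2.0, window=5, threshold=0.01):
    """Smooth the image, then keep pixels that are strict local maxima
    above an intensity threshold (candidate nucleus centers)."""
    smoothed = ndimage.gaussian_filter(image.astype(float), sigma=sigma)
    local_max = ndimage.maximum_filter(smoothed, size=window)
    peaks = (smoothed == local_max) & (smoothed > threshold)
    return np.argwhere(peaks)

# Synthetic test image: two bright point sources standing in for nuclei.
img = np.zeros((64, 64))
img[20, 20] = 1.0
img[40, 45] = 1.0
centers = detect_nuclei(img)
```

On real microscopy volumes the same idea applies slice-wise or in 3D (`ndimage` filters accept N-dimensional arrays); the threshold then has to be calibrated against the fluorescence background.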

Relevance:

100.00%

Publisher:

Abstract:

The goal of the present research is to define a Semantic Web framework for precedent modelling, using knowledge extracted from text, metadata, and rules, while maintaining a strong text-to-knowledge morphism between legal texts and legal concepts, in order to bridge the gap between a legal document and its semantics. The framework is composed of four models that use standard languages from the Semantic Web stack of technologies: a document metadata structure, modelling the main parts of a judgement and creating a bridge between a text and its semantic annotations of legal concepts; a legal core ontology, modelling abstract legal concepts and institutions contained in a rule of law; a legal domain ontology, modelling the main legal concepts in a specific domain addressed by the case-law; and an argumentation system, modelling the structure of argumentation. The input to the framework includes metadata associated with judicial concepts and an ontology library representing the structure of case-law. The research builds on previous community efforts in legal knowledge representation and rule interchange for applications in the legal domain, applying the theory to a set of real legal documents and pushing OWL axiom definitions as far as possible so that they provide both a semantically powerful representation of the legal document and solid ground for an argumentation system using a defeasible subset of predicate logic. It appears that some new features of OWL 2 unlock useful reasoning capabilities for legal knowledge, especially when combined with defeasible rules and argumentation schemes. The main task is thus to formalize the legal concepts and argumentation patterns contained in a judgement, with the following requirement: to check, validate, and reuse the discourse of a judge, and the argumentation he produces, as expressed by the judicial text.
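As an illustration of the kind of text-to-concept bridge the framework describes, a fragment along these lines could link a judgement paragraph to a domain concept; all names and namespaces here are hypothetical, not the framework's actual vocabulary.

```turtle
@prefix ex:   <http://example.org/legal#> .
@prefix rdf:  <http://www.w3.org/1999/02/22-rdf-syntax-ns#> .
@prefix owl:  <http://www.w3.org/2002/07/owl#> .
@prefix rdfs: <http://www.w3.org/2000/01/rdf-schema#> .

# Core-ontology class and a domain-ontology specialization of it.
ex:LegalConcept     rdf:type owl:Class .
ex:ConsumerContract rdf:type owl:Class ;
    rdfs:subClassOf ex:LegalConcept .

# A judgement fragment (document metadata layer) annotated with the
# legal concept it discusses (semantic layer).
ex:judgement42-para3 rdf:type ex:JudgementParagraph ;
    ex:discusses ex:ConsumerContract .
```

An OWL 2 reasoner applied to such triples would infer that the paragraph discusses a `LegalConcept`, which is the kind of subsumption reasoning the abstract refers to.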

Relevance:

100.00%

Publisher:

Abstract:

Decomposition-based approaches are recalled from both the primal and the dual points of view. The possibility of building partially disaggregated reduced master problems is investigated. This extends the idea of aggregated-versus-disaggregated formulations to a gradual choice among alternative levels of aggregation. Partial aggregation is applied to the linear multicommodity minimum-cost flow problem. The possibility of having only partially aggregated bundles opens a wide range of alternatives with different trade-offs between the number of iterations and the computation required per iteration. This trade-off is explored on several sets of instances, and the results are compared with those obtained by directly solving the natural node-arc formulation. An iterative solution process for the route assignment problem is proposed, based on the well-known Frank-Wolfe algorithm. In order to provide a first feasible solution to the Frank-Wolfe algorithm, a linear multicommodity minimum-cost flow problem is solved to optimality using the decomposition techniques mentioned above. Solutions of this problem are useful for network orientation and design, especially in relation to public transportation systems such as Personal Rapid Transit. A single-commodity robust network design problem is then addressed: an undirected graph with edge costs is given together with a discrete set of balance matrices representing different supply/demand scenarios, and the goal is to determine the minimum-cost installation of capacities on the edges such that the flow exchange is feasible in every scenario. A set of new instances that are computationally hard for the natural flow formulation is solved by means of a new heuristic algorithm. Finally, an efficient decomposition-based heuristic approach for a large-scale stochastic unit commitment problem is presented.
The addressed real-world stochastic problem employs at its core a deterministic unit commitment planning model developed by the California Independent System Operator (ISO).
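The Frank-Wolfe iteration mentioned above can be sketched on a toy instance: a quadratic objective minimized over the probability simplex. The objective, dimensions, and step rule 2/(k+2) are illustrative assumptions, not the thesis's actual route assignment formulation.

```python
def frank_wolfe(grad, n, iters=5000):
    """Frank-Wolfe on the probability simplex: at each step, move toward
    the vertex that minimizes the linearized objective."""
    x = [1.0 / n] * n                             # start at the barycenter
    for k in range(iters):
        g = grad(x)
        i = min(range(n), key=lambda j: g[j])     # linear minimization oracle
        gamma = 2.0 / (k + 2)                     # standard diminishing step
        x = [(1 - gamma) * xj for xj in x]        # convex combination with e_i
        x[i] += gamma
    return x

# Toy objective: f(x) = ||x - c||^2 with c inside the simplex, so x* = c.
c = [0.2, 0.3, 0.5]
x = frank_wolfe(lambda x: [2 * (xj - cj) for xj, cj in zip(x, c)], n=3)
```

In route assignment the linear minimization oracle becomes a shortest-path computation per origin-destination pair, which is what makes the method attractive at scale.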

Relevance:

100.00%

Publisher:

Abstract:

Information is nowadays a key resource: machine learning and data mining techniques have been developed to extract high-level information from large amounts of data. As most data comes in the form of unstructured text in natural language, research on text mining is currently very active and addresses practical problems. Among these, text categorization deals with the automatic organization of large quantities of documents into predefined taxonomies of topic categories, possibly arranged in large hierarchies. In commonly proposed machine learning approaches, classifiers are automatically trained from pre-labeled documents: they can perform very accurate classification, but often require a sizeable training set and considerable computational effort. Methods for cross-domain text categorization have been proposed, which leverage a set of labeled documents from one domain to classify those of another. Most such methods use advanced statistical techniques, usually involving the tuning of parameters. A first contribution presented here is a method based on nearest centroid classification, where profiles of categories are generated from the known domain and then iteratively adapted to the unknown one. Despite being conceptually simple and having easily tuned parameters, this method achieves state-of-the-art accuracy on most benchmark datasets with fast running times. A second, deeper contribution involves the design of a domain-independent model to distinguish the degree and type of relatedness between arbitrary documents and topics, inferred from the different types of semantic relationships between their representative words, identified by specific search algorithms. The application of this model is tested on both flat and hierarchical text categorization, where it potentially allows the efficient addition of new categories during classification.
Results show that classification accuracy still needs improvement, but models generated in one domain are shown to be effectively reusable in a different one.
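The iterative centroid-adaptation idea can be sketched as follows; the toy vectors, cosine similarity, and update rule are illustrative assumptions rather than the thesis's actual algorithm.

```python
def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def cosine(a, b):
    return dot(a, b) / (dot(a, a) ** 0.5 * dot(b, b) ** 0.5)

def centroid(vectors):
    n = len(vectors)
    return [sum(v[i] for v in vectors) / n for i in range(len(vectors[0]))]

def adapt_centroids(source, target_docs, rounds=5):
    """Build per-category centroids from labeled source-domain docs, then
    iteratively re-estimate each centroid from the unlabeled target docs
    it currently attracts."""
    cents = {label: centroid(docs) for label, docs in source.items()}
    for _ in range(rounds):
        buckets = {label: [] for label in cents}
        for d in target_docs:
            best = max(cents, key=lambda label: cosine(d, cents[label]))
            buckets[best].append(d)
        cents = {label: centroid(docs) if docs else cents[label]
                 for label, docs in buckets.items()}
    return cents

# Source domain: "sport" docs lean on dimension 0, "tech" docs on dimension 1.
source = {"sport": [[0.9, 0.1], [0.8, 0.2]], "tech": [[0.1, 0.9], [0.2, 0.8]]}
# Target domain: same topics, but with drifted term weights.
target = [[0.7, 0.4], [0.6, 0.3], [0.3, 0.9], [0.2, 0.7]]
cents = adapt_centroids(source, target)
labels = [max(cents, key=lambda l: cosine(d, cents[l])) for d in target]
```

The adaptation loop is what lets the source-domain profiles drift toward the target domain's vocabulary, which is the core of the cross-domain method described above.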

Relevance:

100.00%

Publisher:

Abstract:

This dissertation demonstrates and improves the predictive power of coupled-cluster theory with regard to the highly accurate calculation of molecular properties. The demonstration is carried out by means of extrapolation and additivity techniques in single-reference coupled-cluster theory, with whose help the existence and structure of previously unknown molecules containing heavy main-group elements are predicted. The example of cyclic SiS_2 in particular, a triatomic molecule with 16 valence electrons, makes clear that the predictive power of the theory is nowadays on a par with experiment: theoretical considerations initiated an experimental search for this molecule, which ultimately led to its detection and characterization by rotational spectroscopy. The predictive power of coupled-cluster theory is improved by developing a multireference coupled-cluster method for the calculation of first-order spin-orbit splittings in ²Π states. The focus here is on Mukherjee's variant of multireference coupled-cluster theory, but in principle the proposed computational scheme is applicable to all variants. The target accuracy is 10 cm^-1. It is reached with the new method when one- and two-electron effects and, for heavy elements, also scalar-relativistic effects are taken into account. In combination with coupled-cluster-based extrapolation and additivity schemes, the method is therefore well suited for computing highly accurate thermochemical data.
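The extrapolation techniques mentioned above can be illustrated with the widely used two-point X⁻³ formula for correlation energies; the basis-set cardinal numbers and energies below are made-up values for illustration, not results from the dissertation.

```python
def cbs_two_point(e_x, e_y, x, y):
    """Two-point complete-basis-set extrapolation assuming the correlation
    energy converges as E(X) = E_CBS + A * X**-3 (Helgaker-style formula)."""
    return (x**3 * e_x - y**3 * e_y) / (x**3 - y**3)

# Hypothetical correlation energies (hartree) in cc-pVTZ (X=3) and cc-pVQZ (X=4).
e_cbs = cbs_two_point(-0.300, -0.320, 3, 4)
```

Additivity schemes then stack corrections (higher excitations, core correlation, relativistic effects) computed in smaller bases on top of such an extrapolated base value.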

Relevance:

100.00%

Publisher:

Abstract:

The exchange of chemical constituents between ocean and atmosphere provides potentially important feedback mechanisms in the climate system. The aim of this study is to develop and evaluate a chemically coupled global atmosphere-ocean model. For this, an atmosphere-ocean general circulation model with atmospheric chemistry has been expanded to include oceanic biogeochemistry and the process of air-sea gas exchange. The calculation of seawater concentrations in the oceanic biogeochemistry submodel has been expanded from DMS, CO₂
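The air-sea gas exchange process described above is commonly parameterized with a wind-speed-dependent transfer velocity; the quadratic Wanninkhof-type form below is a generic illustration, not necessarily the scheme used in this model, and the numbers are invented.

```python
def gas_transfer_velocity(u10, schmidt):
    """Gas transfer velocity (cm/h) from 10-m wind speed (m/s), scaled by
    the gas's Schmidt number relative to the reference value 660."""
    return 0.31 * u10**2 * (schmidt / 660.0) ** -0.5

def air_sea_flux(k, c_water, c_equilibrium):
    """Bulk flux: positive values mean outgassing from ocean to atmosphere."""
    return k * (c_water - c_equilibrium)

k = gas_transfer_velocity(u10=7.5, schmidt=660.0)
```

In a coupled model, `c_equilibrium` would come from the atmospheric partial pressure and solubility, and `c_water` from the biogeochemistry submodel's surface concentration.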

Relevance:

100.00%

Publisher:

Abstract:

In a world that increasingly demands automation of the activities of the industrial production chain, computer vision is a fundamental tool for what is already internationally recognized as the Fourth Industrial Revolution, or Industry 4.0. Using this tool, I undertook, at the company Syngenta, a study of the problem of automatically counting the number of leaves of a plant. The problem was tackled using two different approaches inspired by the literature. The thesis also contains the design description of a further method, not yet present in the literature. The methodologies are explained in detail, and the results obtained with the first two approaches are compared. The final chapter draws conclusions based on the results obtained and on their analysis.

Relevance:

100.00%

Publisher:

Abstract:

To compare the haemostatic effect and tissue reactions of different agents and methods used for haemorrhage control in apical surgery.

Relevance:

100.00%

Publisher:

Abstract:

Host determinants of HIV-1 viral tropism include factors from producer cells that affect the efficiency of productive infection and factors in target cells that block infection after viral entry. TRIM5α restricts HIV-1 infection at an early post-entry step through a mechanism associated with rapid disassembly of the retroviral capsid. Topoisomerase I (TOP1) appears to play a role in HIV-1 viral tropism by incorporating into virions, or otherwise modulating them, affecting the efficiency of a post-entry step: the expression of human TOP1 in African Green Monkey (AGM) virion-producing cells increased the infectivity of progeny virions fivefold. This infectivity enhancement required human TOP1 residues 236 and 237, as their replacement with the AGM counterpart residues abolished the enhancement. Our previous studies showed that TOP1 interacts with BTBD1 and BTBD2, two proteins that co-localize with the TRIM5 splice variant TRIM5δ in cytoplasmic bodies. Because BTBD1 and BTBD2 interact with one HIV-1 viral tropism factor, TOP1, and co-localize with a splice variant of another, we investigated the potential involvement of BTBD1 and BTBD2 in HIV-1 restriction.

Relevance:

100.00%

Publisher:

Abstract:

This study evaluated the operator variability of different finishing and polishing techniques. After placing 120 composite restorations (Tetric EvoCeram) in plexiglass molds, the surface of the specimens was roughened in a standardized manner. Twelve operators with different experience levels polished the specimens using the following finishing/polishing procedures: method 1 (40 µm diamond [40D], 15 µm diamond [15D], 42 µm silicon carbide polisher [42S], 6 µm silicon carbide polisher [6S] and Occlubrush [O]); method 2 (40D, 42S, 6S and O); method 3 (40D, 42S, 6S and PoGo); method 4 (40D, 42S and PoGo) and method 5 (40D, 42S and O). The mean surface roughness (Ra) was measured with a profilometer. Differences between the methods were analyzed with non-parametric ANOVA and pairwise Wilcoxon signed rank tests (α = 0.05). All the restorations were qualitatively assessed using SEM. Methods 3 and 4 showed the best polishing results and method 5 demonstrated the poorest. Method 5 was also most dependent on the skills of the operator. Except for method 5, all of the tested procedures reached a clinically acceptable surface polish of Ra ≤ 0.2 µm. Polishing procedures can be simplified without increasing variability between operators and without jeopardizing polishing results.
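The mean surface roughness Ra reported above is the arithmetic mean of the absolute deviations of the measured profile from its mean line; a minimal sketch (the profile heights are invented values, not measurement data):

```python
def mean_roughness(profile):
    """Ra: arithmetic mean of absolute deviations from the mean line."""
    mean_line = sum(profile) / len(profile)
    return sum(abs(z - mean_line) for z in profile) / len(profile)

# Invented profile heights (µm) sampled along a single profilometer trace.
ra = mean_roughness([0.15, -0.05, 0.10, -0.20, 0.0])
```

The clinical acceptance criterion in the study then amounts to checking `ra <= 0.2` for each polished specimen.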