970 results for geometric arrays


Relevance: 20.00%

Abstract:

Máster Universitario en Sistemas Inteligentes y Aplicaciones Numéricas en Ingeniería (SIANI)

Relevance: 20.00%

Abstract:

The re-identification problem has commonly been tackled using appearance features based on salient points and color information. In this paper, we focus on the possibilities that simple geometric features obtained from depth images captured with RGB-D cameras may offer for the task, particularly when working under severe illumination conditions. The results achieved for different sets of simple geometric features extracted in a top-view setup suggest that they provide useful descriptors for the re-identification task, which can be integrated in an ambient intelligence environment as part of a sensor network.
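As a rough illustration of what such simple geometric features can look like, the following Python sketch derives a small descriptor from a top-view depth map; the floor distance, segmentation threshold and field names are illustrative assumptions, not values from the paper.

```python
# Sketch: simple geometric descriptors from a top-view depth image (mm),
# camera looking straight down. All thresholds are illustrative.
import numpy as np

def geometric_features(depth_mm, floor_mm=3000.0, head_band_mm=200.0):
    height_map = floor_mm - depth_mm           # height above the floor plane
    person = height_map > 800.0                # crude person/floor segmentation
    if not person.any():
        return None
    heights = height_map[person]
    top = float(heights.max())                 # estimated body height
    head = heights > (top - head_band_mm)      # pixels near the head apex
    return {
        "height_mm": top,
        "area_px": int(person.sum()),          # occupied top-view area
        "head_area_px": int(head.sum()),       # approximate head/shoulder size
        "mean_height_mm": float(heights.mean()),
    }
```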

Relevance: 20.00%

Abstract:

Array seismology is a useful tool for detailed investigation of the Earth's interior. By exploiting the coherence properties of the wavefield, seismic arrays are able to extract directivity information and to increase the amplitude of the coherent signal relative to that of incoherent noise. The Double Beam Method (DBM), developed by Krüger et al. (1993, 1996), is one possible application of seismic arrays to a refined seismic investigation of the crust and mantle. The DBM is based on a combination of source and receiver arrays, leading to a further improvement of the signal-to-noise ratio and a reduced error in the location of coherent phases. Previous DBM studies have addressed mantle and core/mantle resolution (Krüger et al., 1993; Scherbaum et al., 1997; Krüger et al., 2001). Here, an implementation of the DBM is presented at the 2D large scale (an Italian data set for the Mw=9.3 Sumatra earthquake) and at the 3D crustal scale as proposed by Rietbrock & Scherbaum (1999), applying a revised version of the Source Scanning Algorithm (SSA; Kao & Shan, 2004). In the 2D application, the propagation of the rupture front in time has been computed. In the 3D application, the study area (20×20×33 km³), the data set and the source-receiver configurations are those of the KTB-1994 seismic experiment (Jost et al., 1998). We used 60 short-period seismic stations (200-Hz sampling rate, 1-Hz sensors) arranged in 9 small arrays deployed in 2 concentric rings of about 1 km (A-arrays) and 5 km (B-array) radius. The coherence values of the scattering points have been computed in the crustal volume over a finite time window along all array stations, given the hypothesized origin time and source location. The resulting images can be read as a (relative) joint log-likelihood that any point in the subsurface has contributed to the full set of observed seismograms.
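For orientation, the sketch below shows the delay-and-sum stacking that underlies a double beam: a plane-wave beam is formed across the receiver array for each source, and the resulting beams are stacked again across the source array. The geometry, slownesses and traces are synthetic placeholders, not the KTB configuration.

```python
# Schematic double-beam stack under a plane-wave assumption.
import numpy as np

def beam(traces, positions, slowness, dt):
    """Delay-and-sum beam. traces: (n, n_samp); positions: (n, 2) in km;
    slowness: (2,) in s/km; dt: sampling interval in s."""
    out = np.zeros(traces.shape[1])
    for x, trace in zip(positions, traces):
        shift = int(round(np.dot(x, slowness) / dt))  # plane-wave delay (samples)
        out += np.roll(trace, -shift)                 # align and stack
    return out / len(traces)

def double_beam(traces_per_source, rec_pos, src_pos, s_rec, s_src, dt):
    """traces_per_source: (n_src, n_rec, n_samp). Beam over receivers for
    each source, then beam the receiver beams over the source array."""
    rec_beams = np.array([beam(tr, rec_pos, s_rec, dt)
                          for tr in traces_per_source])
    return beam(rec_beams, src_pos, s_src, dt)
```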

Relevance: 20.00%

Abstract:

Non-equilibrium statistical mechanics is a broad subject. Roughly speaking, it deals with systems which have not yet relaxed to an equilibrium state, with systems which are in a steady non-equilibrium state, or with more general situations. These are characterized by external forcing and internal fluxes, resulting in a net production of entropy which quantifies dissipation and the extent to which, by the Second Law of Thermodynamics, time-reversal invariance is broken. In this thesis we discuss some of the mathematical structures involved with generic discrete-state-space non-equilibrium systems, which we depict with networks fully analogous to electrical networks. We define suitable observables and derive their linear-regime relationships; we discuss a duality between external and internal observables that reverses the roles of the system and of the environment; and we show that network observables serve as constraints for a derivation of the minimum entropy production principle. We dwell on deep combinatorial aspects of linear-response determinants, which are related to spanning-tree polynomials in graph theory, and we give a geometrical interpretation of observables in terms of Wilson loops of a connection and gauge degrees of freedom. We specialize the formalism to continuous-time Markov chains; we give a physical interpretation of observables in terms of locally detailed balanced rates; we prove many variants of the fluctuation theorem; and we show that a well-known expression for the entropy production due to Schnakenberg descends from considerations of gauge invariance, where the gauge symmetry is related to the freedom in the choice of a prior probability distribution. As an additional topic of geometrical flavor related to continuous-time Markov chains, we discuss the Fisher-Rao geometry of non-equilibrium decay modes, showing that the Fisher matrix contains information about many aspects of non-equilibrium behavior, including non-equilibrium phase transitions and superposition of modes. We establish a sort of statistical equivalence principle and discuss the behavior of the Fisher matrix under time reversal. To conclude, we propose that geometry and combinatorics might greatly increase our understanding of non-equilibrium phenomena.
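For context, the entropy production expression attributed above to Schnakenberg has the following standard form for a continuous-time Markov chain with transition rates $w_{xy}$ (from state $y$ to state $x$) and occupation probabilities $p_x$:

```latex
\begin{align}
  \dot p_x &= \sum_{y} \bigl( w_{xy}\, p_y - w_{yx}\, p_x \bigr), \\
  \sigma   &= \frac{1}{2} \sum_{x,y} \bigl( w_{xy}\, p_y - w_{yx}\, p_x \bigr)
              \ln \frac{w_{xy}\, p_y}{w_{yx}\, p_x} \;\geq\; 0 .
\end{align}
```

The rate $\sigma$ vanishes exactly when detailed balance, $w_{xy} p_y = w_{yx} p_x$ for every pair of states, holds, i.e. at equilibrium.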

Relevance: 20.00%

Abstract:

A novel design based on electric-field-free open microwell arrays for the automated continuous-flow sorting of single cells or small clusters of cells is presented. The main feature of the proposed device is the parallel analysis of cell-cell and cell-particle interactions in each microwell of the array. High-throughput sample recovery, with fast and separate transfer from the microsites to standard microtiter plates, is also possible thanks to flexible printed circuit board technology, which makes it possible to produce cost-effective large-area arrays with geometries compatible with laboratory equipment. Particle isolation is performed via negative dielectrophoretic forces which convey the particles into the microwells. Particles such as cells and beads flow in electrically active microchannels on whose substrate the electrodes are patterned. The introduction of particles into the microwells is performed automatically by generating the required feedback signal through a microscope-based optical counting and detection routine. In order to isolate a controlled number of particles, we created two particular configurations of the electric field within the structure: the first permits their isolation, whereas the second creates a net force which repels the particles from the microwell entrance. To increase the parallelism at which the cell-isolation function is implemented, a new technique based on coplanar electrodes to detect particle presence was implemented. A lock-in amplifying scheme was used to monitor the impedance of the channel as perturbed by flowing particles in high-conductivity suspension media. The impedance measurement module was also combined with the dielectrophoretic focusing stage situated upstream of the measurement stage, to limit the dispersion of the measured signal amplitude caused by variation of the particles' position within the microchannel. In conclusion, the designed system complies with the initial specifications, making it suitable for cellomics and biotechnology applications.
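To make the lock-in scheme concrete, here is a minimal Python sketch of the detection principle: the measured signal is mixed with in-phase and quadrature references at the excitation frequency and low-pass filtered, yielding the amplitude and phase of the impedance-modulated carrier. Sampling rate, carrier frequency and filter order are illustrative assumptions.

```python
# Sketch of software lock-in demodulation.
import numpy as np
from scipy.signal import butter, filtfilt

def lock_in(signal, fs, f_ref, cutoff=100.0):
    """Amplitude and phase of the f_ref component of `signal` (fs in Hz)."""
    t = np.arange(signal.size) / fs
    i = signal * np.cos(2 * np.pi * f_ref * t)   # in-phase mixing
    q = signal * np.sin(2 * np.pi * f_ref * t)   # quadrature mixing
    b, a = butter(4, cutoff / (fs / 2))          # low-pass keeps the DC term
    i_f, q_f = filtfilt(b, a, i), filtfilt(b, a, q)
    amplitude = 2 * np.hypot(i_f, q_f)           # factor 2 restores peak amplitude
    phase = np.arctan2(q_f, i_f)
    return amplitude, phase
```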

Relevance: 20.00%

Abstract:

Carbon has a unique ability to form networks of differently hybridized atoms that can generate various allotropes and may also exist as nanoscale materials. The emergence of carbon nanostructures occurred initially through the serendipitous discovery of fullerenes and then through experimental advances which led to carbon nanotubes, nanohorns and graphene. The structural diversity of carbon nanoscopic allotropes and their unique and unprecedented properties give rise to countless applications, and they have been intensively exploited in nanotechnology, since they may address the need to create smarter optoelectronic devices, smaller in size and with better performance. The versatile properties of carbon nanomaterials are reflected in the multidisciplinary character of my doctoral research where, in particular, I take advantage of the opportunities offered by fullerenes and carbon nanotubes in constructing novel functional materials. In this work, carbon nanostructures are incorporated in novel photoactive functional systems constructed through different types of interactions: covalent bonds, ion pairing or self-assembly. The variety of properties exhibited by carbon nanostructures is explored by assigning them a different role in each specific array: fullerenes are employed as electron or energy acceptors, whereas carbon nanotubes behave as optically inert scaffolds for luminescent materials or as nanoscale substrates in sonication-induced self-assembly. All the presented systems serve as a testbed for exploring the properties of carbon nanostructures in multicomponent arrays, which may be advantageous for the production of new photovoltaic or optoelectronic devices, as well as in the design and control of self-assembly processes.

Relevance: 20.00%

Abstract:

Massive parallel robots (MPRs) driven by discrete actuators are force-regulated robots that undergo continuous motions despite being commanded through only a finite number of states. Designing a real-time control for such systems requires fast and efficient methods for solving their inverse static analysis (ISA), which is a challenging problem and the subject of this thesis. In particular, five artificial intelligence methods are proposed to investigate the on-line computation and the generalization error of the ISA problem for a class of MPRs featuring three-state force actuators and one degree of revolute motion.
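As a purely hypothetical sketch of this kind of learning-based inverse mapping (the thesis's five methods are not specified here), the following trains a small feed-forward network on a synthetic forward model and inverts it; the number of actuators, the forward map and all dimensions are placeholders.

```python
# Sketch: learn a pose -> actuator-states inverse with a multi-output MLP,
# then round predictions to the nearest of the three actuator states.
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)
n_act = 12                                      # hypothetical actuator count
W = rng.normal(size=(n_act, 2))                 # placeholder forward statics
states = rng.integers(0, 3, (5000, n_act))      # three-state actuator samples
poses = states @ W                              # synthetic pose = f(states)

net = MLPRegressor(hidden_layer_sizes=(64, 64), max_iter=500)
net.fit(poses, states)                          # learn the inverse map

pred = np.clip(np.rint(net.predict(poses[:5])), 0, 2).astype(int)
```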

Relevance: 20.00%

Abstract:

The ability of block copolymers to spontaneously self-assemble into a variety of ordered nanostructures not only makes them a scientifically interesting system for the investigation of order-disorder phase transitions, but also offers a wide range of nanotechnological applications. The architecture of a diblock is the simplest among the block copolymer systems, hence it is often used as a model system in both experiment and theory. We introduce a new soft-tetramer model for efficient computer simulations of diblock copolymer melts. The instantaneous non-spherical shape of polymer chains in the molten state is incorporated by modeling each of the two blocks as two soft spheres. The interactions between the spheres are modeled in such a way that the diblock melt tends to microphase-separate with decreasing temperature. Using Monte Carlo simulations, we determine the equilibrium structures at variable values of the two relevant control parameters, the diblock composition and the incompatibility of unlike components. The simplicity of the model allows us to scan the control parameter space with a completeness that has not been reached in previous molecular simulations. The resulting phase diagram shows clear similarities with the phase diagram found in experiments. Moreover, we show that structural details of block copolymer chains can be reproduced by our simple model. We develop a novel method for the identification of the observed diblock copolymer mesophases that formalizes the usual approach of direct visual observation, using the characteristic geometry of the structures. A cluster analysis algorithm is used to determine clusters of each component of the diblock, and the number and shape of the clusters can be used to determine the mesophase. We also employ methods from integral geometry for the identification of mesophases and compare their usefulness to the cluster analysis approach. To probe the properties of our model in confinement, we perform molecular dynamics simulations of atomistic polyethylene melts confined between graphite surfaces. The results from these simulations are used as input for an iterative coarse-graining procedure that yields a surface interaction potential for the soft-tetramer model. Using the interaction potential derived in this way, we perform an initial study of the behavior of the soft-tetramer model in confinement. Comparing with experimental studies, we find that our model can reflect basic features of confined diblock copolymer melts.
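A minimal sketch of the cluster-analysis step, assuming the coordinates of one block component and an illustrative linking cutoff: particles closer than the cutoff are connected, and the number of connected components (and their shapes) hints at the mesophase.

```python
# Sketch: count clusters of one diblock component via a cutoff graph.
import numpy as np
from scipy.spatial import cKDTree
from scipy.sparse import csr_matrix
from scipy.sparse.csgraph import connected_components

def count_clusters(coords, cutoff):
    """Connected clusters among points linked within `cutoff`."""
    n = len(coords)
    pairs = np.array(sorted(cKDTree(coords).query_pairs(cutoff)))
    if pairs.size == 0:
        return n, np.arange(n)          # every particle is its own cluster
    adj = csr_matrix((np.ones(len(pairs)), (pairs[:, 0], pairs[:, 1])),
                     shape=(n, n))
    return connected_components(adj, directed=False)
```

In a lamellar phase, for instance, each component forms a few system-spanning clusters, whereas a spherical phase yields many small ones.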

Relevance: 20.00%

Abstract:

Every year, thousands of surgical procedures are performed in order to repair or, where possible, completely replace organs or tissues affected by degenerative diseases. Patients with these kinds of illnesses spend long times waiting for a donor who could replace, in a short time, the damaged organ or tissue. The shortage of biological alternatives in conventional surgical treatments such as autografts, allografts and xenografts has led researchers from different areas to collaborate in finding innovative solutions. This research gave rise to a new discipline able to merge molecular biology, biomaterials, engineering, biomechanics and, recently, design and architecture knowledge. This discipline is named Tissue Engineering (TE) and it represents a step towards substitutive or regenerative medicine. One of the major challenges of TE is to design and develop, using a biomimetic approach, an artificial 3D anatomical scaffold suitable for the adhesion of cells that are able to proliferate and differentiate in response to the biological and biophysical stimuli of the specific tissue to be replaced. Nowadays, powerful instruments allow ever more accurate and refined analyses to be performed on patients that need more precise diagnoses and treatments. Starting from patient-specific information provided by CT (Computed Tomography), micro-CT and MRI (Magnetic Resonance Imaging), an image-based approach can be used to reconstruct the site to be replaced. With the aid of recent Additive Manufacturing techniques, which allow three-dimensional objects to be printed with sub-millimetric precision, it is now possible to exert almost complete control over the parametric characteristics of the scaffold: this is the way to achieve correct cellular regeneration. In this work, we focus on a branch of TE known as Bone TE, in which bone is the main subject. Bone TE combines the osteoconductive and morphological aspects of the scaffold, whose main properties are pore diameter, structure porosity and interconnectivity. Realizing the ideal values of these parameters is the main goal of this work: here we create a simple and interactive biomimetic design process, based on 3D CAD modeling and generative algorithms, that provides a way to control the main properties and to create a structure morphologically similar to cancellous bone. Two different typologies of scaffold are compared: the first is based on Triply Periodic Minimal Surfaces (TPMS), whose basic crystalline geometries are nowadays used for Bone TE scaffolding; the second is based on Voronoi diagrams, which are more often used in the design of decorations and jewellery for their capacity to decompose and tessellate a volumetric space with a heterogeneous spatial distribution (frequent in nature). In this work, we show how to manipulate the main properties (pore diameter, structure porosity and interconnectivity) of the TE-oriented scaffold design through the implementation of generative algorithms: "bringing back the nature to the nature".
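As an illustration of the TPMS route, the sketch below samples the gyroid implicit surface on a voxel grid and thresholds it; shifting the iso-level trades strut thickness against porosity. Resolution, unit-cell count and iso-level are illustrative, not the thesis's design values.

```python
# Sketch: gyroid TPMS scaffold as a boolean voxel solid.
import numpy as np

def gyroid_scaffold(n=64, cells=2.0, t=0.0):
    """True where material is placed; larger t -> thinner struts, higher porosity."""
    u = np.linspace(0.0, 2.0 * np.pi * cells, n)
    x, y, z = np.meshgrid(u, u, u, indexing="ij")
    g = np.sin(x) * np.cos(y) + np.sin(y) * np.cos(z) + np.sin(z) * np.cos(x)
    return g > t

solid = gyroid_scaffold()
porosity = 1.0 - solid.mean()      # void fraction of the bounding volume
print(f"porosity = {porosity:.2f}")
```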

Relevance: 20.00%

Abstract:

In many industrial sectors, for example the automotive industry, digital mock-ups are used to verify the design and the function of a product on a virtual prototype. One use case is the verification of safety clearances between individual components, the so-called clearance analysis. For selected components, engineers determine whether they maintain a prescribed safety distance to the surrounding components, both in their rest position and during a motion. If components fall below the safety distance, their shape or position must be changed. To that end, it is important to know exactly which regions of the components violate the safety distance.

In this work we present a solution for the real-time computation of all regions between two geometric objects that fall below the safety distance. Each object is given as a set of primitives (e.g. triangles). For every instant at which a transformation is applied to one of the objects, we compute the set of all primitives that fall below the safety distance and call it the set of all tolerance-violating primitives. We present a complete solution that can be divided into the following three major topics.

In the first part of this work we investigate algorithms that check whether two triangles are tolerance-violating. We present several approaches for triangle-triangle tolerance tests and show that dedicated tolerance tests are considerably faster than the distance computations used so far. The focus of our work is the development of a novel tolerance test that operates in dual space. In all our benchmarks for the computation of all tolerance-violating primitives, our dual-space approach consistently proves to be the fastest.

The second part of this work deals with data structures and algorithms for the real-time computation of all tolerance-violating primitives between two geometric objects. We develop a combined data structure composed of a flat hierarchical data structure and several uniform grids. To guarantee efficient runtimes, it is above all important to account for the required safety distance in the design of the data structures and the query algorithms. We present solutions that quickly determine the set of primitive pairs to be tested. In addition, we develop strategies for recognizing primitives as tolerance-violating without computing an expensive primitive-primitive tolerance test. In our benchmarks we show that our solutions are able to compute, in real time, all tolerance-violating primitives between two complex geometric objects, each consisting of many hundreds of thousands of primitives.

In the third part we present a novel, memory-optimized data structure for managing the cell contents of the previously used uniform grids, which we call Shrubs. Previous approaches to the memory optimization of uniform grids mostly rely on hashing methods, which, however, do not reduce the memory consumption of the cell contents. In our application, neighboring cells often have similar contents. Exploiting this redundancy, our approach losslessly compresses the memory footprint of the cell contents of a uniform grid to one fifth of its previous size and decompresses it at runtime.

Finally, we show how our solution for computing all tolerance-violating primitives can be applied in practice. Beyond pure clearance analysis, we present applications to several path-planning problems.
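To make the pruning idea concrete, here is a small Python sketch of a conservative pre-test in the spirit of the strategies described above. It uses bounding spheres rather than the dual-space test developed in the work, and decides many triangle pairs cheaply before any exact triangle-triangle tolerance test is run; all names are illustrative.

```python
# Sketch: bounding-sphere pre-test for a triangle pair and safety distance d_safe.
import numpy as np

def bounding_sphere(tri):
    """Centroid-based bounding sphere of a 3x3 triangle vertex array."""
    c = tri.mean(axis=0)
    return c, np.linalg.norm(tri - c, axis=1).max()

def tolerance_pretest(tri_a, tri_b, d_safe):
    """True/False when cheaply decidable, None if the exact test is needed."""
    (ca, ra), (cb, rb) = bounding_sphere(tri_a), bounding_sphere(tri_b)
    d = np.linalg.norm(ca - cb)
    if d - ra - rb > d_safe:
        return False   # minimum distance certainly exceeds the safety distance
    if d + ra + rb <= d_safe:
        return True    # even the farthest point pair is within the tolerance
    return None        # ambiguous: run the exact triangle-triangle test
```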

Relevance: 20.00%

Abstract:

Computing the weighted geometric mean of large sparse matrices is an operation that tends to become rapidly intractable as the size of the matrices involved grows. However, if we are not interested in computing the matrix function itself, but just its product with a vector, the problem becomes simpler, and there is a chance to solve it even when the matrix mean itself would be impossible to compute. Our interest is motivated by the fact that this calculation has practical applications related to the preconditioning of some operators arising in the domain decomposition of elliptic problems. In this thesis, we explore how such a computation can be performed efficiently. First, we exploit the properties of the weighted geometric mean and find several equivalent ways to express it through real powers of a matrix. Hence, we focus our attention on matrix powers and examine how well-known techniques can be adapted to the solution of the problem at hand. In particular, we consider two broad families of approaches for the computation of f(A) v, namely quadrature formulae and Krylov subspace methods, and generalize them to the pencil case f(A\B) v. Finally, we provide an extensive experimental evaluation of the proposed algorithms and also try to assess how convergence speed and execution time are influenced by certain characteristics of the input matrices. Our results suggest that a few elements have some bearing on performance and that, although there is no best choice in general, knowing the conditioning and the sparsity of the arguments beforehand can considerably help in choosing the best strategy to tackle the problem.
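For context, one standard identity for the weighted geometric mean of symmetric positive definite matrices $A$ and $B$ (assumed here; it matches the abstract's reduction to real matrix powers and the pencil notation) is:

```latex
\begin{align}
  A \#_t B &= A^{1/2}\bigl(A^{-1/2} B A^{-1/2}\bigr)^{t} A^{1/2}
            \;=\; A\,\bigl(A^{-1} B\bigr)^{t},
            \qquad t \in [0,1], \\
  (A \#_t B)\, v &= A\, f\!\left(A^{-1}B\right) v,
            \qquad f(X) = X^{t},
\end{align}
```

so the product with a vector only requires applying a real power of the pencil $A^{-1}B = A\backslash B$ to $v$, never the mean itself.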

Relevance: 20.00%

Abstract:

To propose the determination of the macromolecular baseline (MMBL) in clinical 1H MR spectra, based on T1 and T2 differentiation using 2D fitting in FiTAID, a general Fitting Tool for Arrays of Interrelated Datasets.