964 results for Totally Disconnected N-Dimensional Space


Relevance:

100.00%

Publisher:

Abstract:

Machine learning comprises a series of techniques for the automatic extraction of meaningful information from large collections of noisy data. In many real-world applications, data is naturally represented in structured form. Since traditional methods in machine learning deal with vectorial information, they require an a priori form of preprocessing. Among all the learning techniques for dealing with structured data, kernel methods are recognized to have a strong theoretical background and to be effective approaches. They do not require an explicit vectorial representation of the data in terms of features, but rely on a measure of similarity between any pair of objects of a domain: the kernel function. Designing fast and good kernel functions is a challenging problem. In the case of tree-structured data, two issues become relevant: kernels for trees should not be sparse and should be fast to compute. The sparsity problem arises when, given a dataset and a kernel function, most structures of the dataset are completely dissimilar to one another. In those cases the classifier has too little information for making correct predictions on unseen data; in fact, it tends to produce a discriminating function behaving like the nearest-neighbour rule. Sparsity is likely to arise for some standard tree kernel functions, such as the subtree and subset tree kernels, when they are applied to datasets with node labels belonging to a large domain. A second drawback of using tree kernels is the time complexity required in both the learning and classification phases. Such complexity can sometimes prevent the kernel's application in scenarios involving large amounts of data. This thesis proposes three contributions to resolving the above issues of kernels for trees. A first contribution aims at creating kernel functions which adapt to the statistical properties of the dataset, thus reducing their sparsity with respect to traditional tree kernel functions.
Specifically, we propose to encode the input trees by an algorithm able to project the data onto a lower-dimensional space with the property that similar structures are mapped similarly. By building kernel functions on the lower-dimensional representation, we are able to perform inexact matchings between different inputs in the original space. A second contribution is the proposal of a novel kernel function based on the convolution kernel framework. A convolution kernel measures the similarity of two objects in terms of the similarities of their subparts. Most convolution kernels are based on counting the number of shared substructures, partially discarding information about their position in the original structure. The kernel function we propose is, instead, especially focused on this aspect. A third contribution is devoted to reducing the computational burden related to the calculation of a kernel function between a tree and a forest of trees, which is a typical operation in the classification phase and, for some algorithms, also in the learning phase. We propose a general methodology applicable to convolution kernels. Moreover, we show an instantiation of our technique when kernels such as the subtree and subset tree kernels are employed. In those cases, Directed Acyclic Graphs can be used to compactly represent shared substructures in different trees, thus reducing the computational burden and storage requirements.
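The sparsity issue can be made concrete with a minimal exact-match subtree kernel. This is a toy sketch with our own tree encoding, not the thesis's algorithm: it counts pairs of identical complete subtrees, and returns zero for trees with disjoint labels, which is exactly the sparsity case described above.

```python
from collections import Counter

# A tree is (label, (child, child, ...)).
def _collect(tree, out):
    """Append a canonical string for every complete subtree of `tree`."""
    label, children = tree
    s = "(" + label + "".join(_collect(c, out) for c in children) + ")"
    out.append(s)
    return s

def subtree_kernel(t1, t2):
    """Count pairs of identical complete subtrees of t1 and t2."""
    a, b = [], []
    _collect(t1, a)
    _collect(t2, b)
    ca, cb = Counter(a), Counter(b)
    # shared subtree serializations, weighted by how often each occurs
    return sum(ca[s] * cb[s] for s in ca)
```

With a large label domain most trees share no subtree at all, so the kernel matrix is dominated by zeros, which is why the thesis maps trees to a lower-dimensional space first.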

Relevance:

100.00%

Publisher:

Abstract:

Chronic obstructive pulmonary disease (COPD) is an umbrella term for diseases that lead to coughing, sputum production and dyspnoea (shortness of breath) at rest or under exertion; chronic bronchitis and pulmonary emphysema are counted among them. The progression of COPD is closely linked to an increase in the wall volume of the small airways (bronchi). High-resolution computed tomography (CT) is considered the gold standard (the best and most reliable diagnostic method) for examining the morphology of the lung. When measuring bronchi, which are approximately tubular structures, in CT images, the small size of the bronchi relative to the resolution of a clinical CT scanner poses a major problem. This thesis shows how CT images are computed from conventional X-ray projections, where the mathematical and physical sources of error in the image formation process lie, and how a CT system can be made mathematically tractable by interpreting it as a linear shift-invariant (LSI) system. Based on linear systems theory, ways of describing the resolution of imaging methods are derived. It is shown how the tracheobronchial tree can be robustly segmented from a CT data set and converted, by means of a topology-preserving 3-dimensional skeletonization algorithm, into a skeleton representation and subsequently into a cycle-free graph. Based on linear systems theory, a new and promising integral-based method (IBM) for measuring small structures in CT images is presented. To validate the IBM results, various measurements were carried out on a phantom consisting of 10 different silicone tubes.
With the aid of the skeleton and graph representations, the complete segmented tracheobronchial tree can be measured in 3-dimensional space. For 8 pigs that were each scanned twice, good reproducibility of the IBM results was demonstrated. In a further study carried out with IBM, it was shown that the mean percentage bronchial wall thickness in CT data sets of 16 smokers is significantly higher than in data sets of 15 non-smokers. IBM may also be usable for wall-thickness measurements in problems from other fields, or can at least serve as a source of ideas. An article describing the developed method and the study results obtained with it has been accepted for publication in the journal IEEE Transactions on Medical Imaging.
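The LSI view of image formation can be sketched in one dimension: the measured profile is the true attenuation profile convolved with the scanner's point spread function (PSF). A PSF normalized to unit sum preserves the integral of the profile even though it lowers the peak, which is the kind of property an integral-based measurement can exploit. All numbers and names here are illustrative, not from the thesis.

```python
import math

def gaussian_psf(sigma, radius):
    """Discrete Gaussian PSF, normalized to unit sum."""
    k = [math.exp(-0.5 * (i / sigma) ** 2) for i in range(-radius, radius + 1)]
    s = sum(k)
    return [v / s for v in k]

def convolve(signal, kernel):
    """Direct convolution with a symmetric kernel, zero-padded boundaries."""
    r = len(kernel) // 2
    out = []
    for i in range(len(signal)):
        acc = 0.0
        for j, w in enumerate(kernel):
            idx = i + j - r
            if 0 <= idx < len(signal):
                acc += w * signal[idx]
        out.append(acc)
    return out

# a thin "bronchial wall" profile, well inside the field of view
profile = [0.0] * 101
for i in range(45, 56):
    profile[i] = 1.0
blurred = convolve(profile, gaussian_psf(2.0, 10))
```

The peak of the blurred profile underestimates the true value (the resolution problem for small bronchi), while its integral is preserved.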

Relevance:

100.00%

Publisher:

Abstract:

In this thesis, we study the phenomenology of selected observables in the context of the Randall-Sundrum scenario of a compactified warped extra dimension. Gauge and matter fields are assumed to live in the whole five-dimensional space-time, while the Higgs sector is localized on the infrared boundary. An effective four-dimensional description is obtained via Kaluza-Klein decomposition of the five-dimensional quantum fields. The symmetry-breaking effects due to the Higgs sector are treated exactly, and the decomposition of the theory is performed in a covariant way. We develop a formalism which allows for a straightforward generalization to scenarios with an extended gauge group compared to the Standard Model of elementary particle physics. As an application, we study the so-called custodial Randall-Sundrum model and compare the results to those of the original formulation. We present predictions for electroweak precision observables, the Higgs production cross section at the LHC, the forward-backward asymmetry in top-antitop production at the Tevatron, as well as the width difference, the CP-violating phase, and the semileptonic CP asymmetry in B_s decays.
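Schematically, the Kaluza-Klein decomposition referred to above expands each five-dimensional field in a tower of four-dimensional modes. The following is written in generic conventions, not necessarily those of the thesis (which treats the Higgs-induced mixing exactly):

```latex
\Phi(x^\mu,\phi) \;=\; \frac{1}{\sqrt{2\pi}} \sum_{n\ge 0} \phi_n(x^\mu)\, f_n(\phi),
\qquad
\left(\Box + m_n^2\right)\phi_n(x^\mu) = 0 ,
```

where the profiles f_n(φ) along the extra dimension satisfy an orthonormality relation with a warp-factor weight appropriate to the field's spin, and the mode masses m_n are fixed by the boundary conditions imposed on the f_n.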

Relevance:

100.00%

Publisher:

Abstract:

The spine is a complex structure that provides motion in three directions: flexion and extension, lateral bending, and axial rotation. So far, the investigation of the mechanical and kinematic behaviour of the basic unit of the spine, a motion segment, has predominantly been the domain of in vitro experiments on spinal loading simulators. Most existing approaches to measuring spinal stiffness intraoperatively, in an in vivo environment, use a distractor; however, these concepts usually assume planar loading and motion. The objective of our study was to develop and validate an apparatus that allows intraoperative in vivo measurements determining both the applied force and the resulting motion in three-dimensional space. The proposed setup combines force measurement with an instrumented distractor and motion tracking with an optoelectronic system. As both the orientation of the applied force and the three-dimensional motion are known, not only force-displacement but also moment-angle relations can be determined. The validation was performed using three cadaveric lumbar ovine spines. The lateral bending stiffness of two motion segments per specimen was determined with the proposed concept and compared with the stiffness acquired on a spinal loading simulator, which was considered the gold standard. The mean values of the stiffness computed with the proposed concept were within ±15% of the data obtained with the spinal loading simulator under applied loads of less than 5 Nm.
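One way to reduce such measurements to a moment-angle stiffness is sketched below. This is a hedged illustration, not the study's actual processing: from a force vector F applied at a known lever arm r, the moment is M = r × F, and stiffness is estimated as the least-squares slope of moment versus bending angle, assuming a linear small-deflection model. Variable names are ours.

```python
import math

def moment(r, F):
    """Moment M = r x F (cross product), r in metres, F in newtons."""
    rx, ry, rz = r
    fx, fy, fz = F
    return (ry * fz - rz * fy, rz * fx - rx * fz, rx * fy - ry * fx)

def magnitude(v):
    return math.sqrt(sum(c * c for c in v))

def stiffness(angles_deg, moments_nm):
    """Least-squares slope (Nm per degree) of a moment-angle relation."""
    n = len(angles_deg)
    ma = sum(angles_deg) / n
    mm = sum(moments_nm) / n
    num = sum((a - ma) * (m - mm) for a, m in zip(angles_deg, moments_nm))
    den = sum((a - ma) ** 2 for a in angles_deg)
    return num / den
```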

Relevance:

100.00%

Publisher:

Abstract:

Software visualizations can provide a concise overview of a complex software system. Unfortunately, as software has no physical shape, there is no "natural" mapping of software to a two-dimensional space. As a consequence most visualizations tend to use a layout in which position and distance have no meaning, and consequently layout typically diverges from one visualization to another. We propose an approach to consistent layout for software visualization, called Software Cartography, in which the position of a software artifact reflects its vocabulary, and distance corresponds to similarity of vocabulary. We use Latent Semantic Indexing (LSI) to map software artifacts to a vector space, and then use Multidimensional Scaling (MDS) to map this vector space down to two dimensions. The resulting consistent layout allows us to develop a variety of thematic software maps that express very different aspects of software while making it easy to compare them. The approach is especially suitable for comparing views of evolving software, as the vocabulary of software artifacts tends to be stable over time. We present a prototype implementation of Software Cartography, and illustrate its use with practical examples from numerous open-source case studies.
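The first step of such a layout pipeline can be sketched as follows: map each artifact to a vocabulary vector and derive pairwise distances (1 − cosine similarity), which is the input an MDS embedding would consume. Plain term counts stand in here for the paper's LSI step; names are ours.

```python
import math
from collections import Counter

def vocab_vector(text):
    """Bag-of-words term counts for one software artifact."""
    return Counter(text.lower().split())

def cosine(a, b):
    dot = sum(c * b[t] for t, c in a.items() if t in b)
    na = math.sqrt(sum(c * c for c in a.values()))
    nb = math.sqrt(sum(c * c for c in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def distance_matrix(docs):
    """Pairwise vocabulary distances: similar vocabulary -> small distance."""
    vecs = [vocab_vector(d) for d in docs]
    return [[1.0 - cosine(u, v) for v in vecs] for u in vecs]
```

Feeding this matrix to MDS yields the 2D positions; because vocabulary is stable over time, the resulting layout stays consistent across versions.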

Relevance:

100.00%

Publisher:

Abstract:

We carry out some computations of vector-valued Siegel modular forms of degree two, weight (k, 2) and level one, and highlight three experimental results: (1) we identify a rational eigenform in a three-dimensional space of cusp forms; (2) we observe that non-cuspidal eigenforms of level one are not always rational; (3) we verify a number of cases of conjectures about congruences between classical modular forms and Siegel modular forms. Our approach is based on Satoh's description of the module of vector-valued Siegel modular forms of weight (k, 2) and an explicit description of the Hecke action on Fourier expansions. (C) 2013 Elsevier Inc. All rights reserved.

Relevance:

100.00%

Publisher:

Abstract:

This dissertation concerns convergence analysis for nonparametric problems in the calculus of variations and sufficient conditions for a weak local minimizer of a functional, for both nonparametric and parametric problems. Newton's method in infinite-dimensional space is proved to be well defined and to converge quadratically to a weak local minimizer of a functional subject to certain boundary conditions. Sufficient conditions for global convergence are proposed, and a well-defined algorithm based on those conditions is presented and proved to converge. Finite element discretization is employed to achieve an implementable line-search-based quasi-Newton algorithm, and a proof of convergence of the discretization of the algorithm is included. This work also proposes sufficient conditions for a weak local minimizer without using the language of conjugate points. The form of the new conditions is consistent with those in the finite-dimensional case. It is believed that the new form of sufficient conditions will lead to simpler approaches for verifying an extremal as a local minimizer for well-known problems in the calculus of variations.
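After finite-element discretization, the infinite-dimensional Newton iteration reduces to steps of the shape below. As a one-dimensional toy (the objective is our choice, not from the dissertation), minimizing J(u) = u⁴/4 − u via Newton's method on the first-order condition J′(u) = u³ − 1 = 0 exhibits the quadratic convergence the dissertation proves in function space.

```python
def newton_minimize(dJ, d2J, u, iters=10):
    """Newton's method on the first-order optimality condition dJ(u) = 0."""
    history = [u]
    for _ in range(iters):
        u = u - dJ(u) / d2J(u)  # Newton step; well defined while d2J(u) != 0
        history.append(u)
    return u, history

# J(u) = u**4/4 - u has the weak local minimizer u = 1 (J'(1)=0, J''(1)=3>0)
u_star, hist = newton_minimize(lambda u: u**3 - 1.0,
                               lambda u: 3.0 * u**2,
                               2.0)
```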

Relevance:

100.00%

Publisher:

Abstract:

Software visualizations can provide a concise overview of a complex software system. Unfortunately, since software has no physical shape, there is no "natural" mapping of software to a two-dimensional space. As a consequence most visualizations tend to use a layout in which position and distance have no meaning, and consequently layout typically diverges from one visualization to another. We propose a consistent layout for software maps in which the position of a software artifact reflects its vocabulary, and distance corresponds to similarity of vocabulary. We use Latent Semantic Indexing (LSI) to map software artifacts to a vector space, and then use Multidimensional Scaling (MDS) to map this vector space down to two dimensions. The resulting consistent layout allows us to develop a variety of thematic software maps that express very different aspects of software while making it easy to compare them. The approach is especially suitable for comparing views of evolving software, since the vocabulary of software artifacts tends to be stable over time.

Relevance:

100.00%

Publisher:

Abstract:

The anatomy of the human brain is organized as a complex arrangement of interrelated structures in three dimensional space. To facilitate the understanding of both structure and function, we have created a volume rendered brain atlas (VRBA) with an intuitive interface that allows real-time stereoscopic rendering of brain anatomy. The VRBA incorporates 2-dimensional and 3-dimensional texture mapping to display segmented brain anatomy co-registered with a T1 MRI. The interface allows the user to remove and add any of the 62 brain structures, as well as control the display of the MRI dataset. The atlas also contains brief verbal and written descriptions of the different anatomical regions to correlate structure with function. A variety of stereoscopic projection methods are supported by the VRBA and provide an abstract, yet simple, way of visualizing brain anatomy and 3-dimensional relationships between different nuclei.

Relevance:

100.00%

Publisher:

Abstract:

We present the first-order corrected dynamics of fluid branes carrying higher-form charge by obtaining the general form of their equations of motion to pole-dipole order in the absence of external forces. Assuming linear response theory, we characterize the corresponding effective theory of stationary bent charged (an)isotropic fluid branes in terms of two sets of response coefficients, the Young modulus and the piezoelectric moduli. We subsequently find large classes of examples in gravity of this effective theory, by constructing stationary strained charged black brane solutions to first order in a derivative expansion. Using solution generating techniques and bent neutral black branes as a seed solution, we obtain a class of charged black brane geometries carrying smeared Maxwell charge in Einstein-Maxwell-dilaton gravity. In the specific case of ten-dimensional space-time we furthermore use T-duality to generate bent black branes with higher-form charge, including smeared D-branes of type II string theory. By subsequently measuring the bending moment and the electric dipole moment which these geometries acquire due to the strain, we uncover that their form is captured by classical electroelasticity theory. In particular, we find that the Young modulus and the piezoelectric moduli of our strained charged black brane solutions are parameterized by a total of 4 response coefficients, in both the isotropic and the anisotropic case.

Relevance:

100.00%

Publisher:

Abstract:

An autonomous energy source within the human body is of key importance in the development of medical implants. This work deals with the modelling and validation of an energy harvesting device which converts myocardial contractions into electrical energy. The mechanism consists of a clockwork from a commercially available wrist watch. We developed a physical model which is able to predict the total amount of energy generated when an external excitation is applied. For the validation of the model, a custom-made hexapod robot was used to accelerate the harvesting device along a given trajectory. We applied forward kinematics to determine the actual motion experienced by the harvesting device. The recovered motion provides translational as well as rotational information for accurate simulations in three-dimensional space. The physical model was successfully validated.
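The forward-kinematics step above can be sketched with homogeneous transforms: the pose of the moving platform at each time step is a 4×4 matrix (rotation plus translation), and applying it to the mounting point of the harvester yields its trajectory in three-dimensional space. This is a minimal illustration; the hexapod's actual leg geometry and the paper's specific trajectory are not modelled.

```python
import math

def mat_mul(A, B):
    """Product of two 4x4 homogeneous transforms."""
    return [[sum(A[i][k] * B[k][j] for k in range(4)) for j in range(4)]
            for i in range(4)]

def rot_z(theta):
    c, s = math.cos(theta), math.sin(theta)
    return [[c, -s, 0, 0], [s, c, 0, 0], [0, 0, 1, 0], [0, 0, 0, 1]]

def translate(x, y, z):
    return [[1, 0, 0, x], [0, 1, 0, y], [0, 0, 1, z], [0, 0, 0, 1]]

def apply(T, p):
    """Apply a homogeneous transform to a 3D point."""
    v = (p[0], p[1], p[2], 1.0)
    return tuple(sum(T[i][j] * v[j] for j in range(4)) for i in range(3))

# illustrative pose at one time step: lift by 5 mm while yawing 90 degrees
pose = mat_mul(translate(0.0, 0.0, 5.0), rot_z(math.pi / 2))
```

Sampling such poses along the commanded trajectory gives the translational and rotational motion history fed into the physical model.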

Relevance:

100.00%

Publisher:

Abstract:

My dissertation focuses mainly on Bayesian adaptive designs for phase I and phase II clinical trials. It includes three specific topics: (1) proposing a novel two-dimensional dose-finding algorithm for biological agents, (2) developing Bayesian adaptive screening designs to provide more efficient and ethical clinical trials, and (3) incorporating missing late-onset responses to make an early stopping decision. Treating patients with novel biological agents is becoming a leading trend in oncology. Unlike cytotoxic agents, for which toxicity and efficacy monotonically increase with dose, biological agents may exhibit non-monotonic patterns in their dose-response relationships. Using a trial with two biological agents as an example, we propose a phase I/II trial design to identify the biologically optimal dose combination (BODC), which is defined as the dose combination of the two agents with the highest efficacy and tolerable toxicity. A change-point model is used to reflect the fact that the dose-toxicity surface of the combinational agents may plateau at higher dose levels, and a flexible logistic model is proposed to accommodate the possible non-monotonic pattern for the dose-efficacy relationship. During the trial, we continuously update the posterior estimates of toxicity and efficacy and assign patients to the most appropriate dose combination. We propose a novel dose-finding algorithm to encourage sufficient exploration of untried dose combinations in the two-dimensional space. Extensive simulation studies show that the proposed design has desirable operating characteristics in identifying the BODC under various patterns of dose-toxicity and dose-efficacy relationships. Trials of combination therapies for the treatment of cancer are playing an increasingly important role in the battle against this disease. 
To more efficiently handle the large number of combination therapies that must be tested, we propose a novel Bayesian phase II adaptive screening design to simultaneously select among possible treatment combinations involving multiple agents. Our design is based on formulating the selection procedure as a Bayesian hypothesis testing problem in which the superiority of each treatment combination is equated to a single hypothesis. During the conduct of the trial, we use the current values of the posterior probabilities of all hypotheses to adaptively allocate patients to treatment combinations. Simulation studies show that the proposed design substantially outperforms the conventional multi-arm balanced factorial trial design: it yields a significantly higher probability of selecting the best treatment at the end of the trial while at the same time allocating substantially more patients to efficacious treatments. The design is most appropriate for trials that combine multiple agents and screen for the efficacious combination to be investigated further. Phase II studies are usually single-arm trials conducted to test the efficacy of experimental agents and to decide whether an agent is promising enough to be sent to a phase III trial. Interim monitoring is employed to stop a trial early for futility, to avoid assigning an unacceptable number of patients to inferior treatments. We propose a Bayesian single-arm phase II design with continuous monitoring for estimating the response rate of the experimental drug.
To address the issue of late-onset responses, we use a piecewise exponential model to estimate the hazard function of the time-to-response data, and we handle the missing responses using a multiple imputation approach. We evaluate the operating characteristics of the proposed method through extensive simulation studies and show that it reduces the total trial duration and yields desirable operating characteristics for different physician-specified lower bounds of the response rate, under different true response rates.
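The continuous-monitoring idea can be sketched with a conjugate Beta-Binomial model (a common choice for single-arm phase II monitoring; the dissertation's model additionally handles late-onset and missing responses, which this sketch omits). With a Beta(a0, b0) prior and x responses in n patients, the posterior is Beta(a0 + x, b0 + n − x); the trial stops for futility when the posterior probability that the response rate exceeds a lower bound p0 is small. The prior, cutoff, and p0 below are illustrative choices.

```python
import math

def beta_tail(a, b, p0, steps=10000):
    """P(p > p0) for p ~ Beta(a, b), with a >= 1, b >= 1, by Simpson's rule."""
    log_norm = math.lgamma(a + b) - math.lgamma(a) - math.lgamma(b)

    def pdf(p):
        if (p == 0.0 and a > 1) or (p == 1.0 and b > 1):
            return 0.0
        s = log_norm
        if a != 1:
            s += (a - 1) * math.log(p)
        if b != 1:
            s += (b - 1) * math.log(1.0 - p)
        return math.exp(s)

    h = (1.0 - p0) / steps  # steps must be even
    total = pdf(p0) + pdf(1.0)
    for i in range(1, steps):
        total += (4 if i % 2 else 2) * pdf(p0 + i * h)
    return total * h / 3.0

def stop_for_futility(responses, n, p0, prior=(1.0, 1.0), cutoff=0.05):
    """Stop when the posterior gives little mass to response rates above p0."""
    a = prior[0] + responses
    b = prior[1] + n - responses
    return beta_tail(a, b, p0) < cutoff
```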

Relevance:

100.00%

Publisher:

Abstract:

In 1905, three articles appeared in the journal "Annalen der Physik" that would revolutionize the physical sciences and call into question the established Newtonian concepts of Space and Time. Albert Einstein's formulation of the Theory of Relativity threw the absolute value of these concepts into crisis and opened new reflections on their conception within the field of physics. Could this revolution be extrapolated to the field of Architecture, where Space and Time play a leading role? One must understand the complexity of the architectural fact and the innumerable variables involved in its definition. This doctoral thesis studies one very specific aspect: how a paradigm (the Theory of Relativity) can intervene in and modify, or not, Architecture. The approach is to go back to the origin: to unravel the moment of interaction between the Theory of Relativity and the Theory of Architecture, in order to determine whether the former influenced the latter in the theoretical avant-garde writings applied to Architecture. "After Einstein: An Architecture for a Theory" searches for the points of connection between the Theory of Relativity and the architectural theory of the avant-gardes of the early twentieth century, their influence and the contamination between one and the other, with possible architectural results arising from this interaction, capable of defining new formal arguments for a new language in Architecture.
The thesis is structured in four chapters. The first sets out the geographical and chronological context in which the Theory of Relativity was developed and its theoretical repercussions for art, in terms of a new definition of space linked to time, as an event unfolding in a four-dimensional domain; the indeterminacy of measurements of space and of time; and the importance of understanding matter as energy.
The second chapter studies the avant-garde movements contemporary with the emergence of Relativity, framed within its closest geographical sphere. Cubism appears as a movement that occasionally draws on mathematics and geometry, under the influence of the scientist Henri Poincaré and non-Euclidean geometries. Futurism explores the advances of science from a certain distance, with a certain lack of scientific rigour or depth, to extract the laws of its new constructive plastic idealism, defining and interpreting its universe on the basis of scientific advances, in response to the crisis of Newtonian space and time. Scientific language is present in concepts such as "simultaneity" (Boccioni), "spherical expansion of light in space" (Severini and Carrà), "four-dimensionality", "space-time", "air-light-force", and "matter and energy", which in parallel make up the operational body of Einstein's theory. Although it is not possible to attribute to the Theory of Relativity a leading role as a reference for artistic thought, in 1936, with the appearance of the Dimensionist manifesto, the new space-time ideas of the European spirit followed by Cubists and Futurists were explicitly attributed to Einstein's theories.
The third chapter describes how the Theory of Relativity came to be a source of inspiration for the Theory of Architecture. Structured in three subsections, it studies the principal author who, after studying and interpreting the Theory of Relativity, contributed extrapolated concepts and ideas to Architecture (Van Doesburg); where the influences and points of contact occurred (Lissitzky, Eggeling, Moholy-Nagy); and how they were disseminated through architecture (Mendelsohn's Einsteinturm) and through the specialized journals.
The fourth chapter draws the conclusions of this thesis, which Moholy-Nagy may well have summarized in his text "Vision in Motion" (1946): "Since 'space-time' can be a misleading term, it especially has to be emphasized that the space-time problems in the arts are not necessarily based upon Einstein's Theory of Relativity. This is not meant to discount the relevance of his theory to the arts. But artists and laymen seldom have the mathematical knowledge to visualize in scientific formulae the analogies to their own work. Einstein's terminology of 'space-time' and 'relativity' has been absorbed by our daily language."

Relevance:

100.00%

Publisher:

Abstract:

The origins of this work lie in the increasing need of biologists and doctors for tools for the visual analysis of data. When dealing with multidimensional data, such as medical data, traditional data mining techniques can be tedious and complex, even for medical experts. It is therefore necessary to develop visualization techniques that complement the expert's judgement and, at the same time, visually support and simplify the process of extracting knowledge from a dataset, greatly enriching the interpretation and understanding of the data. Multidimensionality is inherent to medical data, and obtaining a clinically useful outcome from it requires a time-consuming effort; moreover, neither clinicians nor biologists are trained to manage more than four dimensions. Our specific aim was to design a 3D visual interface for gene profile analysis that is easy to use for both medical and biology experts. To this end, a new analysis method is proposed: MedVir. MedVir is a simple and intuitive analysis mechanism based on visualizing multidimensional medical data in a three-dimensional space, allowing experts to interact with, collaborate on, and enrich this representation. In other words, MedVir performs a strong reduction in data dimensionality in order to represent the original information in a three-dimensional environment, where experts can interact with the data and draw conclusions in a visual and rapid way.
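The kind of dimensionality reduction MedVir relies on can be sketched as follows. The text does not name the method, so principal components computed by power iteration stand in here: project multidimensional records onto their three leading principal directions for display in a 3D environment.

```python
import math
import random

def center(data):
    """Subtract the per-column mean from a list of rows."""
    n, d = len(data), len(data[0])
    means = [sum(row[j] for row in data) / n for j in range(d)]
    return [[row[j] - means[j] for j in range(d)] for row in data]

def cov_times(data, v):
    """(X^T X) v for centered data X, without forming the covariance matrix."""
    d = len(v)
    out = [0.0] * d
    for row in data:
        s = sum(row[j] * v[j] for j in range(d))
        for j in range(d):
            out[j] += s * row[j]
    return out

def top_components(data, k=3, iters=100, seed=0):
    """Leading k principal directions via deflated power iteration."""
    data = center(data)
    d = len(data[0])
    rng = random.Random(seed)
    comps = []
    for _ in range(k):
        v = [rng.random() for _ in range(d)]
        for _ in range(iters):
            w = cov_times(data, v)
            for c in comps:  # deflate directions already found
                proj = sum(w[j] * c[j] for j in range(d))
                w = [w[j] - proj * c[j] for j in range(d)]
            norm = math.sqrt(sum(x * x for x in w))
            if norm == 0.0:
                break
            v = [x / norm for x in w]
        comps.append(v)
    return comps

def project_3d(data, comps):
    """Coordinates of each record in the 3D display space."""
    data = center(data)
    return [tuple(sum(row[j] * c[j] for j in range(len(row))) for c in comps)
            for row in data]
```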

Relevance:

100.00%

Publisher:

Abstract:

Territory or zone design processes entail partitioning a geographic space, organized as a set of areal units, into different regions or zones according to a specific set of criteria that are dependent on the application context. In most cases, the aim is to create zones of approximately equal sizes (zones with equal numbers of inhabitants, same average sales, etc.). However, some of the new applications that have emerged, particularly in the context of sustainable development policies, are aimed at defining zones of a predetermined, though not necessarily similar, size. In addition, the zones should be built around a given set of seeds. This type of partitioning has not been sufficiently researched; therefore, there are no known approaches for automated zone delimitation. This study proposes a new method based on a discrete version of the adaptive additively weighted Voronoi diagram that makes it possible to partition a two-dimensional space into zones of specific sizes, taking both the position and the weight of each seed into account. The method consists of repeatedly solving a traditional additively weighted Voronoi diagram, so that each seed's weight is updated at every iteration. The zones are geographically connected using a metric based on the shortest path. Tests conducted on the extensive farming system of three municipalities in Castile-La Mancha (Spain) have established that the proposed heuristic procedure is valid for solving this type of partitioning problem. Nevertheless, these tests confirmed that the given seed position determines the spatial configuration the method must solve and this may have a great impact on the resulting partition.
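The core loop can be sketched as a simplified discrete additively weighted Voronoi partition whose weights are nudged until each zone approaches its target size. The paper's method additionally uses a shortest-path metric to keep zones geographically connected; plain Euclidean distance on a grid is used here, and the step size is an illustrative choice.

```python
import math

def assign(w, h, seeds, weights):
    """Label each grid cell with the seed minimizing distance minus weight."""
    counts = [0] * len(seeds)
    labels = []
    for y in range(h):
        for x in range(w):
            best = min(range(len(seeds)),
                       key=lambda i: math.hypot(x - seeds[i][0],
                                                y - seeds[i][1]) - weights[i])
            labels.append(best)
            counts[best] += 1
    return labels, counts

def zone_partition(w, h, seeds, targets, iters=300, step=0.01):
    """Iteratively re-weight seeds so zone sizes approach the targets."""
    weights = [0.0] * len(seeds)
    for _ in range(iters):
        _, counts = assign(w, h, seeds, weights)
        # grow undersized zones, shrink oversized ones
        weights = [wt + step * (t - c)
                   for wt, c, t in zip(weights, counts, targets)]
    return assign(w, h, seeds, weights)
```

Replacing `math.hypot` with a shortest-path distance over the areal units would recover the connectivity property the paper requires.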