405 results for Hamming Cube
Abstract:
In this paper we review the novel meccano method. We summarize the main stages (subdivision, mapping, optimization) of this automatic tetrahedral mesh generation technique and focus the study on complex genus-zero solids. In this case, our procedure only requires a surface triangulation of the solid. A crucial consequence of our method is the volume parametrization of the solid to a cube. Using this result, we construct volume T-meshes for isogeometric analysis. The efficiency of the proposed technique is shown with several examples, and a comparison between the meccano method and standard mesh generation techniques is presented.
Abstract:
This work introduces a new technique for tetrahedral mesh optimization. The procedure relocates boundary and inner nodes without changing the mesh topology. In order to maintain the boundary approximation while boundary nodes are moved, a local refinement of tetrahedra with faces on the solid boundary is necessary in some cases. New nodes are projected onto the boundary by using a surface parameterization. The proposed method is applied to tetrahedral meshes of genus-zero solids generated by the meccano method. In this case, the solid boundary is automatically decomposed into six surface patches, which are parameterized onto the six faces of a cube with the Floater parameterization.
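The Floater parameterization mentioned above pins the boundary of a disk-like patch to a convex polygon (here, one cube face) and places every interior vertex at a convex combination of its neighbours, which reduces to one sparse linear solve. A minimal sketch with uniform (Tutte) weights on a toy single-interior-vertex patch; the published method uses shape-preserving or mean-value weights, and the helper names below are ours:

```python
import numpy as np

def floater_map_uniform(n_verts, edges, boundary_uv):
    """Map a disk-like triangulated patch into the plane by Floater's
    barycentric method: each interior vertex becomes a convex combination
    of its neighbours (uniform weights here), with boundary vertices
    pinned to prescribed convex-polygon positions."""
    nbrs = [[] for _ in range(n_verts)]          # adjacency lists
    for a, b in edges:
        nbrs[a].append(b)
        nbrs[b].append(a)
    interior = [v for v in range(n_verts) if v not in boundary_uv]
    idx = {v: k for k, v in enumerate(interior)}
    A = np.zeros((len(interior), len(interior)))
    rhs = np.zeros((len(interior), 2))
    for v in interior:
        A[idx[v], idx[v]] = 1.0
        w = 1.0 / len(nbrs[v])                   # uniform (Tutte) weights
        for u in nbrs[v]:
            if u in boundary_uv:                 # known position -> RHS
                rhs[idx[v]] += w * np.asarray(boundary_uv[u], dtype=float)
            else:
                A[idx[v], idx[u]] -= w
    uv = np.linalg.solve(A, rhs)
    out = {v: tuple(boundary_uv[v]) for v in boundary_uv}
    out.update({v: tuple(uv[idx[v]]) for v in interior})
    return out

# toy patch: 4 boundary corners of the unit square + 1 interior vertex
edges = [(0, 1), (1, 2), (2, 3), (3, 0), (4, 0), (4, 1), (4, 2), (4, 3)]
corners = {0: (0, 0), 1: (1, 0), 2: (1, 1), 3: (0, 1)}
uv = floater_map_uniform(5, edges, corners)
print(uv[4])  # interior vertex lands at the centroid (0.5, 0.5)
```

With uniform weights and a convex boundary the resulting map is a valid (fold-free) embedding by Tutte's theorem, which is what makes the cube-face parameterization well defined.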
Abstract:
Ambient Intelligence (AmI) envisions a world where smart, electronic environments are aware of and responsive to their context. People moving through these settings engage many computational devices and systems simultaneously, even if they are not aware of their presence. AmI stems from the convergence of three key technologies: ubiquitous computing, ubiquitous communication, and natural interfaces. The dependence on a large number of fixed and mobile sensors embedded in the environment makes Wireless Sensor Networks (WSNs) one of the most relevant enabling technologies for AmI. WSNs are complex systems made up of a number of sensor nodes: simple devices that typically embed a low-power computational unit (microcontroller, FPGA, etc.), a wireless communication unit, one or more sensors, and some form of energy supply (either batteries or energy-scavenging modules). Low cost, low computational power, low energy consumption, and small size are characteristics that must be taken into consideration when designing and working with WSNs. To handle the large amount of data generated by a WSN, several multisensor data fusion techniques have been developed. The aim of multisensor data fusion is to combine data to achieve better accuracy and inferences than could be achieved by the use of a single sensor alone. In this dissertation we present our results in building several AmI applications suitable for a WSN implementation. The work can be divided into two main areas: Multimodal Surveillance and Activity Recognition. Novel techniques to handle data from a network of low-cost, low-power Pyroelectric InfraRed (PIR) sensors are presented. These techniques allow the detection of the number of people moving in the environment, their direction of movement, and their position. We discuss how a mesh of PIR sensors can be integrated with a video surveillance system to increase its performance in people tracking.
Furthermore, we embed a PIR sensor within the design of a Wireless Video Sensor Node (WVSN) to extend its lifetime. Activity recognition is a fundamental block in natural interfaces. A challenging objective is to design an activity recognition system that is able to exploit a redundant but unreliable WSN. We present our work in building a novel activity recognition architecture for such a dynamic system. The architecture has a hierarchical structure in which simple nodes perform gesture classification and a high-level meta-classifier fuses a changing number of classifier outputs. We demonstrate the benefits of such an architecture in terms of increased recognition performance and robustness to faults and noise. Furthermore, we show how network lifetime can be extended through a performance-power trade-off. Smart objects can enhance the user experience within smart environments. We present our work in extending the capabilities of the Smart Micrel Cube (SMCube), a smart object used as a tangible interface within a tangible computing framework, through the development of a gesture recognition algorithm suitable for this device of limited computational power. Finally, the development of activity recognition techniques can greatly benefit from the availability of shared datasets. We report our experience in building a dataset for activity recognition. The dataset is freely available to the scientific community for research purposes and can be used as a testbench for developing, testing, and comparing different activity recognition techniques.
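The key property of the hierarchical architecture described above is that the meta-classifier accepts however many node outputs happen to arrive, so dead or noisy nodes degrade performance gracefully instead of breaking the pipeline. A minimal illustrative sketch using plain majority voting; the thesis' actual meta-classifier may weight or gate the nodes differently:

```python
from collections import Counter

def fuse(node_outputs):
    """Fuse a variable number of per-node gesture labels by majority
    vote. Nodes that dropped out report None and simply do not
    contribute a vote, which is what gives the hierarchy its
    fault tolerance."""
    votes = Counter(lbl for lbl in node_outputs if lbl is not None)
    return votes.most_common(1)[0][0] if votes else None

# three of four sensor nodes alive; one disagrees
print(fuse(["wave", "wave", None, "push"]))  # -> 'wave'
```

A performance-power trade-off then amounts to deciding how many nodes to keep awake: fewer voters save energy at the cost of fusion accuracy.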
Abstract:
The purpose of this thesis is to develop a robust and powerful method to classify galaxies from large surveys, in order to establish and confirm the connections between the principal observational parameters of galaxies (spectral features, colours, morphological indices) and help unveil the evolution of these parameters from $z \sim 1$ to the local Universe. Within the framework of the zCOSMOS-bright survey, and making use of its large database of objects ($\sim 10\,000$ galaxies in the redshift range $0 < z \lesssim 1.2$) and its great reliability in determining redshifts and spectral properties, we first adopt and extend the classification cube method, as developed by Mignoli et al. (2009), to exploit the bimodal properties of galaxies (spectral, photometric, and morphological) separately, and then combine these three subclassifications. We use this classification method as a test for a newly devised statistical classification, based on Principal Component Analysis and the Unsupervised Fuzzy Partition clustering method (PCA+UFP), which is able to classify the galaxy population by exploiting its natural global bimodality, considering simultaneously up to 8 different properties. The PCA+UFP analysis is a very powerful and robust tool to probe the nature and the evolution of galaxies in a survey. It allows galaxies to be classified with smaller uncertainties and adds the flexibility to be adapted to different parameters: being a fuzzy classification, it avoids the problems of a hard classification such as the classification cube presented in the first part of the work. The PCA+UFP method can be easily applied to different datasets: it does not rely on the nature of the data and can therefore be successfully employed with other observables (magnitudes, colours) or derived properties (masses, luminosities, SFRs, etc.). The agreement between the two classification cluster definitions is very high.
"Early" and "late" type galaxies are well defined by their spectral, photometric, and morphological properties, both when considering these separately and then combining the classifications (classification cube) and when treating them as a whole (PCA+UFP cluster analysis). Differences arise in the definition of outliers: the classification cube is much more sensitive to single measurement errors or misclassifications in one property than the PCA+UFP cluster analysis, in which errors are "averaged out" during the process. This method allowed us to observe the downsizing effect taking place in the PC spaces: the migration from the blue cloud towards the red clump happens at higher redshifts for galaxies of larger mass. The determination of the transition mass $M_{\mathrm{cross}}$ is in good agreement with other values in the literature.
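The PCA+UFP pipeline described above reduces the up-to-8-property space to its principal components and then assigns each galaxy a soft membership in each cluster rather than a hard label. A self-contained sketch using plain SVD-based PCA and a textbook fuzzy c-means update (all data below is synthetic; the thesis' actual feature set and clustering settings are not reproduced):

```python
import numpy as np

rng = np.random.default_rng(0)

def pca(X, k):
    """Project data onto the first k principal components."""
    Xc = X - X.mean(axis=0)
    _, _, Vt = np.linalg.svd(Xc, full_matrices=False)
    return Xc @ Vt[:k].T

def fuzzy_cmeans(X, c=2, m=2.0, iters=100):
    """Textbook fuzzy c-means; returns the soft membership matrix U (n x c),
    so every point belongs to every cluster with some degree in [0, 1]."""
    n = len(X)
    U = rng.dirichlet(np.ones(c), size=n)            # random soft init
    for _ in range(iters):
        W = U ** m
        centers = (W.T @ X) / W.sum(axis=0)[:, None]
        d = np.linalg.norm(X[:, None] - centers[None], axis=2) + 1e-12
        U = 1.0 / (d ** (2.0 / (m - 1.0)))           # inverse-distance update
        U /= U.sum(axis=1, keepdims=True)            # rows sum to 1
    return U

# two well-separated synthetic "galaxy populations" in an 8-property space
blue = rng.normal(0.0, 1.0, size=(50, 8))
red = rng.normal(4.0, 1.0, size=(50, 8))
X = np.vstack([blue, red])
U = fuzzy_cmeans(pca(X, 2), c=2)
labels = U.argmax(axis=1)
```

The soft memberships in `U` are what let borderline objects avoid the hard-classification outlier problem: a galaxy with noisy measurements sits near 50/50 instead of flipping category.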
Abstract:
This thesis regards the Wireless Sensor Network (WSN) as one of the most important technologies for the twenty-first century and studies the implementation of different packet-correcting erasure codes to cope with the "bursty" nature of the transmission channel and the possibility of packet losses during transmission. The limited battery capacity of each sensor node makes the minimization of power consumption one of the primary concerns in WSNs. Since in each sensor node communication is considerably more expensive than computation, the core idea is to invest computation within the network whenever possible in order to save on communication costs. The goal of the research was to evaluate a parameter, such as the Packet Erasure Ratio (PER), that permits verifying the functionality and behavior of the created network, validating the theoretical expectations, and evaluating the benefit of introducing packet recovery techniques using different types of packet erasure codes in different types of networks. Thus, considering all the energy consumption constraints in WSNs, the aim of this thesis is to minimize consumption by introducing encoding/decoding algorithms into the transmission chain, in order to prevent the retransmission of erased packets through the Packet Erasure Channel and save the energy used for each retransmitted packet. In this way it is possible to extend the lifetime of the entire network.
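The trade computation for communication idea above is easiest to see with the simplest possible packet erasure code: one XOR parity packet per block lets the receiver rebuild any single erased packet locally, with no retransmission. A minimal sketch (the thesis evaluates more general erasure codes; this single-parity scheme is only the degenerate case):

```python
def xor_bytes(a, b):
    return bytes(x ^ y for x, y in zip(a, b))

def encode(packets):
    """Append one XOR parity packet so any single erasure in the block
    is recoverable without retransmission."""
    parity = packets[0]
    for p in packets[1:]:
        parity = xor_bytes(parity, p)
    return packets + [parity]

def recover(received):
    """received: encoded block with exactly one erased packet (None).
    XOR of all surviving packets reconstructs the missing one."""
    missing = received.index(None)
    acc = None
    for i, p in enumerate(received):
        if i != missing:
            acc = p if acc is None else xor_bytes(acc, p)
    out = list(received)
    out[missing] = acc
    return out[:-1]          # drop the parity packet

data = [b"\x01\x02", b"\x0f\x00", b"\xaa\x55"]
sent = encode(data)
sent[1] = None               # one packet erased on the channel
print(recover(sent) == data)
```

The energy argument: one extra packet transmitted per block (the parity) plus a few XORs at the receiver replaces a full retransmission round trip whenever the PER stays below one loss per block.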
Abstract:
This thesis investigates, through a purely quantitative approach, the information content and the morphology of the language of the Voynich manuscript (VMS), known for being written in an unknown and still undeciphered alphabet. First, starting from the concept of entropy developed in information theory, we construct a measure of the information content of a text (the Montemurro-Zanette measure); we then present several experiments measuring the information of texts subjected to linguistic transformations of various kinds (lemmatization, translation, etc.). In particular, applying this measure to the VMS, together with other techniques, allows us to investigate the thematic structure of the manuscript and the relations between its contents, verifying that a semantic continuity exists between consecutive pages belonging to the same section. The large number of hapaxes in the manuscript then leads to morphological considerations: it suggests that the language of the manuscript is highly inflected. In particular, the search for sequences of consecutive hapaxes leads us to identify, plausibly, some proper names. To probe the morphology of the language further, we finally construct a linguistic graph based essentially on the Hamming distance; comparing the topology of these graphs for several languages and for the language of the VMS, we observe that the latter stands out for its greater density and connectivity. In conclusion, the strong evidence in favor of the presence of information content in the text supports the hypothesis that it is written in a real language. However, given the remarkable simplicity of its morphological rules, in our opinion it does not seem comparable to any known natural language, but rather to an artificial one, created specifically for this text.
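A Hamming-distance word graph of the kind described above links word types of equal length that differ in exactly one character; a highly inflected vocabulary (many near-identical word forms) then shows up as unusually high edge density. A small illustrative sketch, with an invented Voynich-like toy vocabulary (the thesis' exact graph construction and threshold are not reproduced here):

```python
from itertools import combinations

def hamming(a, b):
    """Hamming distance between two equal-length strings."""
    return sum(x != y for x, y in zip(a, b))

def word_graph(words):
    """Edges between equal-length word types at Hamming distance 1."""
    return {(a, b) for a, b in combinations(sorted(set(words)), 2)
            if len(a) == len(b) and hamming(a, b) == 1}

def density(words):
    """Fraction of possible word-type pairs that are actually linked."""
    n = len(set(words))
    return 2 * len(word_graph(words)) / (n * (n - 1))

# toy vocabulary of near-identical forms, Voynich-style
vocab = ["daiin", "daiir", "qaiin", "saiin"]
print(density(vocab))  # 4 of 6 possible pairs differ by one character
```

Comparing this density across corpora in different languages is the kind of topological comparison the thesis uses to argue the VMS vocabulary is denser and more connected than natural-language vocabularies.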
Abstract:
This report explores the problem of detecting complex point target models in a MIMO radar system. A complex point target is a mathematical and statistical model for a radar target that is not resolved in space but exhibits varying complex reflectivity across the different bistatic view angles. The complex reflectivity can be modeled as a complex stochastic process whose index set is the set of all bistatic view angles; the parameters of the stochastic process follow from an analysis of a target model comprising a number of ideal point scatterers randomly located within some radius of the target's center of mass. The proposed complex point targets may be applicable to statistical inference in multistatic or MIMO radar systems. Six different target models are summarized here: three 2-dimensional (Gaussian, Uniform Square, and Uniform Circle) and three 3-dimensional (Gaussian, Uniform Cube, and Uniform Sphere). They are assumed to have different distributions for the locations of the point scatterers within the target. We develop data models for the signals received from such targets in a MIMO radar system with distributed assets and partially correlated signals, and consider the resulting detection problem, which reduces to the familiar Gauss-Gauss detection problem. We illustrate that the target parameters and the transmit signal influence detector performance through the target extent and the SNR, respectively. A series of receiver operating characteristic (ROC) curves is generated to show the impact of varying SNR on the detector. The Kullback-Leibler (KL) divergence is applied to obtain the approximate mean difference between the density functions that the scatterers assume inside the target models, showing how detector performance changes with the spatial extent of the point scatterers.
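In the Gauss-Gauss detection problem, both hypotheses are zero-mean Gaussians that differ only in covariance, and the KL divergence between them is a standard proxy for detector performance. A minimal sketch of that divergence, with toy covariances in which the target-return variance grows with a hypothetical "extent" parameter (the report's actual covariance models are not reproduced):

```python
import numpy as np

def kl_gauss(S0, S1):
    """KL divergence D(N(0, S0) || N(0, S1)) between zero-mean
    multivariate Gaussians: larger divergence between the two
    received-signal covariances means an easier detection task."""
    k = S0.shape[0]
    S1_inv = np.linalg.inv(S1)
    return 0.5 * (np.trace(S1_inv @ S0) - k
                  + np.log(np.linalg.det(S1) / np.linalg.det(S0)))

# H0: noise only; H1: noise plus a target return whose variance
# grows with a (hypothetical) target-extent parameter
noise = np.eye(4)
for extent in (0.5, 2.0):
    S1 = noise + extent * np.eye(4)
    print(extent, kl_gauss(S1, noise))
```

Printing the two values shows the divergence growing with the extent parameter, mirroring the report's observation that target extent shifts the ROC curves.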
Abstract:
In this paper we prove a Lions-type compactness embedding result for symmetric unbounded domains of the Heisenberg group. The natural group action on the Heisenberg group is provided by the unitary group U(n) × {1} and its appropriate subgroups, which are used to construct subspaces with specific symmetry and compactness properties in the Folland-Stein horizontal Sobolev space. As an application, we study the multiplicity of solutions for a singular subelliptic problem by exploiting a Rubik's-cube-solving technique applied to subgroups of U(n) × {1}. In our approach we employ concentration compactness, group-theoretical arguments, and variational methods.
Abstract:
Purpose: The aim of this work is to evaluate the geometric accuracy of a prerelease version of a new six degrees of freedom (6DoF) couch. Additionally, a quality assurance method for 6DoF couches is proposed. Methods: The main principle of the performance tests was to request a known shift for the 6DoF couch and to compare this requested shift with the actually applied shift by independently measuring the applied shift using different methods (graph paper, laser, inclinometer, and imaging system). The performance of each of the six axes was tested separately as well as in combination with the other axes. Functional cases as well as realistic clinical cases were analyzed. The tests were performed without a couch load and with a couch load of up to 200 kg; shifts in the range of −4 to +4 cm were applied for the translational axes and of −3° to +3° for the rotational axes. The quality assurance method for the new 6DoF couch was performed using a simple cube phantom and the imaging system. Results: The deviations (mean ± one standard deviation) accumulated over all performance tests between the requested shifts and the measurements of the applied shifts were −0.01 ± 0.02, 0.01 ± 0.02, and 0.01 ± 0.02 cm for the longitudinal, lateral, and vertical axes, respectively. The corresponding values for the three rotational axes (couch rotation, pitch, and roll) were 0.03° ± 0.06°, −0.04° ± 0.12°, and −0.01° ± 0.08°, respectively. There was no difference found between the tests with and without a couch load of up to 200 kg. Conclusions: The new 6DoF couch is able to apply requested shifts with high accuracy. It has the potential to be used for treatment techniques with the highest demands on patient setup accuracy, such as those needed in stereotactic treatments. Shifts can be applied efficiently and automatically. Daily quality assurance of the 6DoF couch can be performed in an easy and efficient way. Long-term stability has to be evaluated in further tests.
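The reported accuracy figures are mean ± one standard deviation of the per-test deviations between requested and independently measured shifts. A tiny sketch of that reduction on invented longitudinal-axis numbers (the paper's raw measurements are not reproduced here):

```python
import numpy as np

# hypothetical requested vs. independently measured longitudinal
# shifts (cm), mimicking one performance test of a single couch axis
requested = np.array([-4.0, -2.0, 0.0, 2.0, 4.0])
measured = np.array([-4.02, -1.99, 0.01, 2.00, 3.98])

dev = measured - requested                      # per-test deviation
print(f"{dev.mean():+.3f} +/- {dev.std(ddof=1):.3f} cm")
```

The same reduction, repeated per axis and pooled over all functional and clinical test cases, yields the tabulated −0.01 ± 0.02 cm style results.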
Abstract:
The main method of proving the Craig Interpolation Property (CIP) constructively uses cut-free sequent proof systems. Until now, however, no such method has been known for proving the CIP using more general sequent-like proof formalisms, such as hypersequents, nested sequents, and labelled sequents. In this paper, we start closing this gap by presenting an algorithm for proving the CIP for modal logics by induction on a nested-sequent derivation. This algorithm is applied to all the logics of the so-called modal cube.
Abstract:
Redox-sensitive trace metals (Mn, Fe, U, Mo, Re), nutrients and terminal metabolic products (NO3-, NH4+, PO43-, total alkalinity) were investigated for the first time in pore waters of Antarctic coastal sediments. The results of this study reveal a high spatial variability in redox conditions in surface sediments from Potter Cove, King George Island, western Antarctic Peninsula (WAP). Particularly in the shallower areas of the bay, the significant correlation between sulphate depletion and total alkalinity, the inorganic product of terminal metabolism, indicates sulphate reduction to be the major pathway of organic matter mineralisation. In contrast, dissimilatory metal oxide reduction seems to prevail in the newly ice-free areas and the deeper troughs, where concentrations of dissolved iron of up to 700 µM were found. We suggest that the increased accumulation of fine-grained material with high amounts of reducible metal oxides, in combination with the reduced availability of metabolisable organic matter and enhanced physical and biological disturbance by bottom water currents, ice scouring and burrowing organisms, favours metal oxide reduction over sulphate reduction in these areas. Based on modelled iron fluxes, we calculate the contribution of the Antarctic shelf to the pool of potentially bioavailable iron (Feb) to be 6.9 × 10^3 to 790 × 10^3 t/yr. Consequently, these shelf sediments would provide an Feb flux of 0.35-39.5 mg/m^2/yr (median: 3.8 mg/m^2/yr) to the Southern Ocean. This contribution is of the same order of magnitude as the flux provided by icebergs and significantly higher than the input by aeolian dust. For this reason, suboxic shelf sediments form a key source of iron for the high-nutrient, low-chlorophyll (HNLC) areas of the Southern Ocean. This source may become even more important in the future due to rising temperatures at the WAP, accompanied by enhanced glacier retreat and the accumulation of meltwater-derived iron-rich material on the shelf.
Abstract:
The results of shore-based three-axis resistivity and X-ray computed tomography (CT) measurements on cube-shaped samples recovered during Leg 185 are presented along with moisture and density, P-wave velocity, resistivity, and X-ray CT measurements on whole-round samples of representative lithologies from Site 1149. These measurements augment the standard suite of physical properties obtained during Leg 185 from the cube samples and samples obtained adjacent to the cut cubes. Both shipboard and shore-based measurements of physical properties provide information that assists in characterizing lithologic units, correlating cored material with downhole logging data, understanding the nature of consolidation, and interpreting seismic reflection profiles.
Abstract:
The combined use of grain size and magnetic fabric analyses provides the ability to discriminate among depositional environments in deep-sea terrigenous sediments. We analyzed samples from three different depositional settings: turbidites, pelagic or hemipelagic interlayers, and sediment drifts. Results indicate that sediment samples from these different environments can be distinguished from each other on the basis of their median grain size, sorting, as well as the intensity and shape of magnetic fabric as determined from an examination of the anisotropy of magnetic susceptibility. We use these discriminators to interpret downcore samples from the Bermuda Rise sediment drift. We find that the finer grains of the Bermuda Rise (relative to the Blake Outer Ridge) do not result from lower depositional energy (current speed) and so may reflect a difference in the nature of sediment being delivered to the site (i.e., distance from source) between the two locations.
Abstract:
During Ocean Drilling Program Leg 188 to Prydz Bay, East Antarctica, several of the shipboard scientists formed the High-Resolution Integrated Stratigraphy Committee (HiRISC). The committee was established in order to furnish an integrated data set from the Pliocene portion of Site 1165 as a contribution to the ongoing debate about Pliocene climate and climate evolution in Antarctica. The proxies determined in our various laboratories were the following: magnetostratigraphy and magnetic properties, grain-size distributions (granulometry), near-ultraviolet, visible, and near-infrared spectrophotometry, calcium carbonate content, characteristics of foraminifer, diatom, and radiolarian content, clay mineral composition, and stable isotopes. In addition to the HiRISC samples, other data sets contained in this report are subsets of much larger data sets. We included these subsets in order to provide the reader with a convenient integrated data set of Pliocene-Pleistocene strata from the East Antarctic continental margin. The data are presented in the form of 14 graphs (in addition to the site map). Text and figure captions guide the reader to the original data sets. Some preliminary interpretations are given at the end of the manuscript.