849 results for geometric transformation
Abstract:
The work reported in this paper is motivated by the need to investigate general methods for pattern transformation. A formal definition of pattern transformation is provided, and four special cases, namely elementary and geometric transformations based on repositioning all or some agents in the pattern, are introduced. The need for a mathematical tool and for simulations to visualize the behavior of a transformation method is highlighted. A mathematical method based on the Moebius transformation is proposed. The transformation method involves discretization of events for planning the paths of individual robots in a pattern. Simulations on a particle physics simulator are used to validate the feasibility of the proposed method.
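As a concrete illustration of the kind of mapping the abstract names (a minimal sketch, not the authors' method; the coefficients, the circular start pattern, and the linear waypoint blending are all illustrative assumptions), a Moebius transformation can be applied to planar agent positions encoded as complex numbers:

```python
import numpy as np

def moebius(z, a, b, c, d):
    """Apply the Moebius map f(z) = (a*z + b) / (c*z + d) to complex points z."""
    if abs(a * d - b * c) < 1e-12:
        raise ValueError("degenerate map: ad - bc must be nonzero")
    return (a * z + b) / (c * z + d)

# Eight agents on a unit circle, planar positions encoded as complex numbers.
agents = np.exp(2j * np.pi * np.arange(8) / 8)

# Illustrative coefficients; a planner would choose them so the image of the
# source pattern matches the target pattern.
a, b, c, d = 1.0, 0.5, 0.2 + 0.1j, 1.0

goal = moebius(agents, a, b, c, d)

# Stand-in for the paper's event discretization: sample each agent's path at
# ten waypoints by linearly blending start and goal positions.
waypoints = [(1 - t) * agents + t * goal for t in np.linspace(0.0, 1.0, 10)]
```

The determinant check matters because the map collapses to a constant when ad - bc = 0, which would send all agents to the same point.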
Abstract:
The work reported in this paper is motivated by biomimetic inspiration: the transformation of patterns. The major issue addressed is the development of feasible transformation methods based on a macroscopic tool. The general requirement for the feasibility of a transformation method is determined by classifying pattern formation approaches and their characteristics. A formal definition of pattern transformation is provided, and four special cases, namely elementary and geometric transformations based on repositioning all or some robotic agents, are introduced. A feasible method for transforming patterns geometrically, based on the macroscopic-parameter operation of a swarm, is considered. The transformation method is applied to a swarm model that lends itself to the technique. Simulation studies are developed to validate the feasibility of the approach, and they do confirm it.
Abstract:
This paper presents a registration method for images with global illumination variations. The method is based on a joint iterative optimization (geometric and photometric) of the L1 norm of the intensity error. Two strategies are compared to directly find the appropriate intensity transformation within each iteration: histogram specification and the solution obtained by analyzing the necessary optimality conditions. Such strategies reduce the search space of the joint optimization to that of the geometric transformation between the images.
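Of the two strategies compared, histogram specification is the easier to sketch. The following is a minimal CDF-matching example in Python (our illustration under generic assumptions, not the paper's implementation; the function and variable names are made up):

```python
import numpy as np

def histogram_specification(source, reference):
    """Remap source intensities so their histogram matches the reference's."""
    src = source.ravel()
    ref = reference.ravel()
    # Empirical CDFs of both images.
    s_values, s_counts = np.unique(src, return_counts=True)
    r_values, r_counts = np.unique(ref, return_counts=True)
    s_cdf = np.cumsum(s_counts) / src.size
    r_cdf = np.cumsum(r_counts) / ref.size
    # For each source intensity, pick the reference intensity whose CDF
    # value matches, then remap every pixel accordingly.
    mapped = np.interp(s_cdf, r_cdf, r_values)
    return np.interp(src, s_values, mapped).reshape(source.shape)
```

Within each registration iteration, a remapping like this fixes the photometric part, so the joint optimization effectively searches only over the geometric transformation between the images.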
Abstract:
The attached document is the post-print version (the version corrected by the publisher).
Abstract:
Hyperspectral remote sensing exploits the electromagnetic scattering patterns of different materials at specific wavelengths [2, 3]. Hyperspectral sensors have been developed to sample the scattered portion of the electromagnetic spectrum extending from the visible region through the near-infrared and mid-infrared, in hundreds of narrow contiguous bands [4, 5]. The number and variety of potential civilian and military applications of hyperspectral remote sensing are enormous [6, 7]. Very often, the resolution cell corresponding to a single pixel in an image contains several substances (endmembers) [4]. In this situation, the scattered energy is a mixture of the endmember spectra. A challenging task underlying many hyperspectral imagery applications is then decomposing a mixed pixel into a collection of reflectance spectra, called endmember signatures, and the corresponding abundance fractions [8–10]. Depending on the mixing scales at each pixel, the observed mixture is either linear or nonlinear [11, 12]. The linear mixing model holds approximately when the mixing scale is macroscopic [13] and there is negligible interaction among distinct endmembers [3, 14]. If, however, the mixing scale is microscopic (intimate mixtures) [15, 16] and the incident solar radiation is scattered by the scene through multiple bounces involving several endmembers [17], the linear model is no longer accurate. Linear spectral unmixing has been intensively researched in recent years [9, 10, 12, 18–21]. It considers that a mixed pixel is a linear combination of endmember signatures weighted by the corresponding abundance fractions. Under this model, and assuming that the number of substances and their reflectance spectra are known, hyperspectral unmixing is a linear problem for which many solutions have been proposed (e.g., maximum likelihood estimation [8], spectral signature matching [22], spectral angle mapper [23], subspace projection methods [24, 25], and constrained least squares [26]). In most cases, the number of substances and their reflectances are not known, and hyperspectral unmixing then falls into the class of blind source separation problems [27]. Independent component analysis (ICA) has recently been proposed as a tool to blindly unmix hyperspectral data [28–31]. ICA is based on the assumption of mutually independent sources (abundance fractions), which is not the case for hyperspectral data, since the sum of the abundance fractions is constant, implying statistical dependence among them. This dependence compromises the applicability of ICA to hyperspectral images, as shown in Refs. [21, 32]. In fact, ICA finds the endmember signatures by multiplying the spectral vectors with an unmixing matrix that minimizes the mutual information among sources. If the sources are independent, ICA provides the correct unmixing, since the minimum of the mutual information is obtained only when the sources are independent. This is no longer true for dependent abundance fractions; nevertheless, some endmembers may be approximately unmixed. These aspects are addressed in Ref. [33].

Under the linear mixing model, the observations from a scene lie in a simplex whose vertices correspond to the endmembers. Several approaches [34–36] have exploited this geometric feature of hyperspectral mixtures [35]. The minimum volume transform (MVT) algorithm [36] determines the simplex of minimum volume containing the data. The method presented in Ref. [37] is also of MVT type but, by introducing the notion of bundles, it takes into account the endmember variability usually present in hyperspectral mixtures. MVT-type approaches are computationally complex. Usually, these algorithms first find the convex hull defined by the observed data and then fit a minimum-volume simplex to it. For example, the gift wrapping algorithm [38] computes the convex hull of n data points in a d-dimensional space with a computational complexity of O(n^(⌊d/2⌋+1)), where ⌊x⌋ is the largest integer less than or equal to x and n is the number of samples. The complexity of the method presented in Ref. [37] is even higher, since the temperature of the simulated annealing algorithm used must follow a log(·) law [39] to assure convergence (in probability) to the desired solution. Aiming at a lower computational complexity, some algorithms such as the pixel purity index (PPI) [35] and N-FINDR [40] still find the minimum-volume simplex containing the data cloud, but they assume the presence of at least one pure pixel of each endmember in the data. This is a strong requisite that may not hold in some data sets. In any case, these algorithms find the set of most nearly pure pixels in the data. The PPI algorithm uses the minimum noise fraction (MNF) [41] as a preprocessing step to reduce dimensionality and to improve the signal-to-noise ratio (SNR). The algorithm then projects every spectral vector onto skewers (a large number of random vectors) [35, 42, 43]. The points corresponding to the extremes, for each skewer direction, are stored. A cumulative account records the number of times each pixel (i.e., a given spectral vector) is found to be an extreme. The pixels with the highest scores are the purest ones. The N-FINDR algorithm [40] is based on the fact that in p spectral dimensions, the p-volume defined by a simplex formed by the purest pixels is larger than any other volume defined by any other combination of pixels. This algorithm finds the set of pixels defining the largest volume by inflating a simplex inside the data. ORASIS [44, 45] is a hyperspectral framework developed by the U.S. Naval Research Laboratory consisting of several algorithms organized in six modules: exemplar selector, adaptive learner, demixer, knowledge base or spectral library, and spatial postprocessor. The first step consists of flat-fielding the spectra. Next, the exemplar selection module is used to select the spectral vectors that best represent the smallest convex cone containing the data. The other pixels are rejected when their spectral angle distance (SAD) is less than a given threshold. The procedure finds the basis for a subspace of lower dimension using a modified Gram–Schmidt orthogonalization. The selected vectors are then projected onto this subspace, and a simplex is found by an MVT process. ORASIS is oriented to real-time target detection from uncrewed air vehicles using hyperspectral data [46].

In this chapter we develop a new algorithm to unmix linear mixtures of endmember spectra. First, the algorithm determines the number of endmembers and the signal subspace using a newly developed concept [47, 48]. Second, the algorithm extracts the most pure pixels present in the data. Unlike other methods, this algorithm is completely automatic and unsupervised. To estimate the number of endmembers and the signal subspace in hyperspectral linear mixtures, the proposed scheme begins by estimating the signal and noise correlation matrices; the latter estimate is based on multiple regression theory. The signal subspace is then identified by selecting the set of signal eigenvalues that best represents the data in the least-squares sense [48, 49]. We note, however, that VCA works with both projected and unprojected data. The extraction of the endmembers exploits two facts: (1) the endmembers are the vertices of a simplex, and (2) the affine transformation of a simplex is also a simplex. Like the PPI and N-FINDR algorithms, VCA also assumes the presence of pure pixels in the data. The algorithm iteratively projects the data onto a direction orthogonal to the subspace spanned by the endmembers already determined. The new endmember signature corresponds to the extreme of the projection. The algorithm iterates until all endmembers are exhausted. VCA performs much better than PPI and better than or comparably to N-FINDR, yet its computational complexity is between one and two orders of magnitude lower than N-FINDR's. The chapter is structured as follows. Section 19.2 describes the fundamentals of the proposed method. Section 19.3 and Section 19.4 evaluate the proposed algorithm using simulated and real data, respectively. Section 19.5 presents some concluding remarks.
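The projection loop described above is straightforward to prototype. Below is a simplified VCA-style sketch (our reading of the description, not the authors' reference code; the random projection direction and the omission of the noise-dependent subspace preprocessing are simplifying assumptions):

```python
import numpy as np

def extract_endmembers(R, p):
    """Pick p candidate endmember spectra from data R (bands x pixels).

    Simplified VCA-style loop: at each step, project all pixels onto a
    direction orthogonal to the subspace spanned by the endmembers found
    so far, and keep the pixel at the extreme of that projection.
    """
    rng = np.random.default_rng(0)
    L, N = R.shape
    E = np.zeros((L, p))          # endmember matrix, filled column by column
    indices = []
    for i in range(p):
        if i == 0:
            P = np.eye(L)         # nothing found yet: project onto all of R^L
        else:
            A = E[:, :i]
            # Projector onto the orthogonal complement of span(A).
            P = np.eye(L) - A @ np.linalg.pinv(A)
        w = P @ rng.standard_normal(L)   # direction orthogonal to span(E)
        proj = w @ R                     # projection of every pixel onto w
        k = int(np.argmax(np.abs(proj)))
        E[:, i] = R[:, k]
        indices.append(k)
    return E, indices
```

Because the affine image of a simplex is a simplex, the extreme of each such projection is a vertex, which is why the selected pixels are endmember candidates under the pure-pixel assumption.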
Abstract:
A macroscopically oriented double diamond inverse bicontinuous cubic phase (QIID) of the lipid glycerol monooleate is reversibly converted into a gyroid phase (QIIG). The initial QIID phase is prepared in the form of a film coating the inside of a capillary, deposited under flow, which produces a sample uniaxially oriented with a ⟨110⟩ axis parallel to the symmetry axis of the sample. A transformation is induced by replacing the water within the capillary tube with a solution of poly(ethylene glycol), which draws water out of the QIID sample by osmotic stress. This converts the QIID phase into a QIIG phase with two coexisting orientations, with the ⟨100⟩ and ⟨111⟩ axes parallel to the symmetry axis, as demonstrated by small-angle X-ray scattering. The process can then be reversed to recover the initial orientation of the QIID phase. The epitaxial relation between the two oriented mesophases is consistent with topology-preserving geometric pathways that have previously been hypothesized for the transformation. Furthermore, this has implications for the production of macroscopically oriented QIIG phases, in particular with applications as nanomaterial templates.
Abstract:
A macroscopically oriented inverse hexagonal phase (HII) of the lipid phytantriol in water is converted to an oriented inverse double diamond bicontinuous cubic phase (QIID). The initial HII phase is uniaxially oriented about the long axis of a capillary, with the cylinders parallel to the capillary axis. The HII phase is converted by cooling to a QIID phase which is also highly oriented, where the cylindrical axis of the former phase has been converted to a ⟨110⟩ axis in the latter, as demonstrated by small-angle X-ray scattering. This epitaxial relationship allows us to discriminate between two competing proposed geometric pathways for converting HII to QIID. Our findings also suggest a new route to highly oriented cubic phase coatings, with applications as nanomaterial templates.
Abstract:
A continuous version of the hierarchical spherical model at dimension d = 4 is investigated. Two limit distributions of the block spin variable X_γ, normalized with exponents γ = d + 2 and γ = d at and above the critical temperature, are established. These results are proven by solving certain evolution equations corresponding to the renormalization group (RG) transformation of the O(N) hierarchical spin model of block size L^d in the limit L ↓ 1 and N → ∞. Starting far away from the stationary Gaussian fixed point, the trajectories of these dynamical systems pass through two different regimes with distinguishable crossover behavior. An interpretation of these trajectories is given by the geometric theory of functions, which describes precisely the motion of the Lee–Yang zeros. The large-N limit of the RG transformation with L^d fixed equal to 2, at criticality, has recently been investigated in both the weak and strong (coupling) regimes by Watanabe (J. Stat. Phys. 115:1669–1713, 2004). Although our analysis deals only with the N = ∞ case, it complements various aspects of that work.
Abstract:
In many industries, for example the automotive industry, digital mock-ups are used to verify the design and the function of a product on a virtual prototype. One use case is checking the safety clearances of individual components, the so-called distance analysis. Engineers determine for specific components whether they maintain a prescribed safety distance to the surrounding components, both at rest and during a motion. If components fall below the safety distance, their shape or position must be changed. For this it is important to know exactly which regions of the components violate the safety distance.

In this thesis we present a solution for the real-time computation of all regions between two geometric objects that fall below a safety distance. Each object is given as a set of primitives (e.g., triangles). For every instant at which a transformation is applied to one of the objects, we compute the set of all primitives that fall below the safety distance and call it the set of all tolerance-violating primitives. We present a comprehensive solution that can be divided into the following three major topics.

In the first part of this thesis we study algorithms that check, for two triangles, whether they are tolerance-violating. We present several approaches for triangle-triangle tolerance tests and show that dedicated tolerance tests are significantly faster than the distance computations used so far. The focus of our work is the development of a novel tolerance test that operates in dual space. In all our benchmarks for computing all tolerance-violating primitives, our dual-space approach proves to be the fastest.

The second part of this thesis deals with data structures and algorithms for the real-time computation of all tolerance-violating primitives between two geometric objects. We develop a combined data structure composed of a flat hierarchical data structure and several uniform grids. To guarantee efficient running times, it is especially important to account for the required safety distance in the design of the data structures and the query algorithms. We present solutions that quickly determine the set of primitive pairs to be tested. Beyond that, we develop strategies for recognizing primitives as tolerance-violating without computing an expensive primitive-primitive tolerance test. In our benchmarks we show that our solutions are able to compute, in real time, all tolerance-violating primitives between two complex geometric objects, each consisting of many hundreds of thousands of primitives.

In the third part we present a novel, memory-optimized data structure for managing the cell contents of the uniform grids used before. We call this data structure Shrubs. Previous approaches to memory optimization of uniform grids rely mainly on hashing methods, but these do not reduce the memory consumption of the cell contents. In our application, neighboring cells often have similar contents. Our approach is able to losslessly compress the cell contents of a uniform grid, exploiting the redundant cell contents, to a fifth of the original size, and to decompress them at run time.

Finally, we show how our solution for computing all tolerance-violating primitives can be applied in practice. Besides pure distance analysis, we show applications to various path-planning problems.
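To make the grid-based pruning concrete, here is a minimal Python sketch (our illustration, not the thesis implementation; the bounding-sphere bound, the cell size choice, and all names are assumptions). It bins one object's triangles into a uniform grid sized by the safety distance and reports only the pairs that an exact triangle-triangle tolerance test would still need to check:

```python
import itertools
from collections import defaultdict

import numpy as np

def bounding_sphere(tri):
    """Cheap bound for a (3, 3) triangle: centroid plus farthest-vertex radius."""
    c = tri.mean(axis=0)
    r = np.linalg.norm(tri - c, axis=1).max()
    return c, r

def candidate_pairs(tris_a, tris_b, safety):
    """Candidate tolerance-violating pairs via a uniform grid (safety > 0).

    Two triangles can violate the safety distance only if their bounding
    spheres come within `safety` of each other, so it suffices to compare
    triangles hashed into the same or nearby grid cells.
    """
    cell = safety                      # cell edge length tied to the tolerance
    grid = defaultdict(list)
    rmax = 0.0
    for j, tri in enumerate(tris_b):
        c, r = bounding_sphere(tri)
        rmax = max(rmax, r)
        grid[tuple((c // cell).astype(int))].append((j, c, r))
    pairs = []
    for i, tri in enumerate(tris_a):
        c, r = bounding_sphere(tri)
        base = (c // cell).astype(int)
        # Enough neighbor cells to cover any sphere of B within range.
        reach = int(np.ceil((r + rmax + safety) / cell))
        for off in itertools.product(range(-reach, reach + 1), repeat=3):
            for j, cj, rj in grid.get(tuple(base + off), ()):
                if np.linalg.norm(c - cj) <= r + rj + safety:
                    pairs.append((i, j))   # still needs an exact tolerance test
    return pairs
```

Pairs not reported are provably farther apart than the safety distance, so the expensive primitive-primitive tolerance test only runs on the survivors.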
Abstract:
It is known that the Camassa–Holm (CH) equation describes pseudo-spherical surfaces and that therefore its integrability properties can be studied by geometrical means. In particular, the CH equation admits nonlocal symmetries of "pseudo-potential type": the standard quadratic pseudo-potential associated with the geodesics of the pseudo-spherical surfaces determined by (generic) solutions to CH allows us to construct a covering π of the equation manifold of CH on which nonlocal symmetries can be explicitly calculated. In this article, we present the Lie algebra of (first-order) nonlocal π-symmetries for the CH equation, and we show that this algebra contains a semidirect sum of the loop algebra over sl(2,R) and the centerless Virasoro algebra. As applications, we compute explicit solutions, we construct a Darboux transformation for the CH equation, and we recover its recursion operator. We also extend our results to the associated Camassa–Holm equation introduced by J. Schiff.
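For reference (the abstract does not restate it), the Camassa–Holm equation in its standard form, together with the equivalent momentum form often used in the literature, reads:

```latex
u_t - u_{xxt} + 3\,u\,u_x = 2\,u_x\,u_{xx} + u\,u_{xxx},
\qquad\text{equivalently}\qquad
m_t + u\,m_x + 2\,u_x\,m = 0,\quad m = u - u_{xx}.
```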
Abstract:
Primary intraosseous carcinoma of the jaws (PIOSCC) may arise from odontogenic epithelium, most commonly from a previous odontogenic cyst. The aim of this case report is to illustrate that the clinician should consider that an apparently benign dentigerous cyst can undergo malignant transformation, and that all material removed from a patient must be evaluated histologically. In a 44-year-old man, a routine periapical X-ray showed an impacted lower left third molar with radiolucency over its crown. Ten years later, the patient complained of pain in the same region, and the tooth was extracted. One month later, the patient still complained of pain and suffered a fracture of the mandible. A biopsy was performed, and carcinoma was diagnosed. The patient was treated surgically with adjuvant radio- and chemotherapy and, after 8 years, is well without signs of recurrence. This report describes a central mandibular carcinoma that probably developed from a previous dentigerous cyst.
Abstract:
This study sought to analyse the behaviour of the average spinal posture using a novel investigative procedure during a maximal incremental effort test performed on a treadmill. Spine motion was collected via stereo-photogrammetric analysis in thirteen amateur athletes. At each time percentage of the gait cycle, the reconstructed spine points were projected onto the sagittal and frontal planes of the trunk. On each plane, a polynomial was fitted to the data, and the two-dimensional geometric curvature along the longitudinal axis of the trunk was calculated to quantify the geometric shape of the spine. The average posture over the gait cycle defined the spine Neutral Curve. This method enabled the lateral deviations, lordosis, and kyphosis of the spine to be quantified noninvasively and in detail. The similarity between any two volunteers was at most 19% on the sagittal plane and 13% on the frontal plane (p < 0.01). The data collected in this study can be considered preliminary evidence that there are subject-specific characteristics in spinal curvature during running. Changes induced by increases in speed were not sufficient for the Neutral Curve to lose its individual characteristics; instead, it behaved like a postural signature. The data show the descriptive capability of a new method for analysing spinal posture during locomotion; however, additional studies with larger sample sizes are necessary to extract more general information from this novel methodology.
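The curvature computation the abstract describes reduces to the standard plane-curve formula κ = y″ / (1 + y′²)^(3/2) applied to the fitted polynomial. Below is a minimal sketch with toy data (our illustration; the polynomial degree and the sample curve are assumptions, not the study's parameters):

```python
import numpy as np

def spine_curvature(z, y, degree=5):
    """Fit y(z) with a polynomial and return its signed geometric curvature.

    z: positions along the longitudinal axis of the trunk
    y: projected spine coordinates on one anatomical plane (sagittal/frontal)
    """
    coeffs = np.polyfit(z, y, degree)
    yp = np.polyval(np.polyder(coeffs, 1), z)    # first derivative y'(z)
    ypp = np.polyval(np.polyder(coeffs, 2), z)   # second derivative y''(z)
    # Curvature of the graph of y(z): kappa = y'' / (1 + y'^2)^(3/2)
    return ypp / (1.0 + yp ** 2) ** 1.5

# Toy example: a gentle S-shape standing in for sagittal-plane spine points.
z = np.linspace(0.0, 1.0, 50)
y = 0.05 * np.sin(2.0 * np.pi * z)
kappa = spine_curvature(z, y)
```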
Abstract:
Universidade Estadual de Campinas. Faculdade de Educação Física
Abstract:
Background: The cultivar Micro-Tom (MT) is regarded as a model system for tomato genetics due to its short life cycle and miniature size. However, efforts to improve tomato genetic transformation have led to protocols dependent on the costly hormone zeatin, combined with an excessive number of steps. Results: Here we report the development of an MT near-isogenic genotype harboring the allele Rg1 (MT-Rg1), which greatly improves tomato in vitro regeneration. Regeneration was further improved in MT by including a two-day incubation of cotyledonary explants on medium containing 0.4 µM 1-naphthaleneacetic acid (NAA) before cytokinin treatment. Both strategies allowed the use of 5 µM 6-benzylaminopurine (BAP), a cytokinin 100 times less expensive than zeatin. The use of MT-Rg1 and NAA pre-incubation, followed by BAP regeneration, resulted in high transformation frequencies (near 40%) in a shorter protocol with fewer steps, spanning approximately 40 days from Agrobacterium infection to transgenic plant acclimatization. Conclusions: The genetic resource and the protocol presented here represent invaluable tools for routine gene expression manipulation and high-throughput functional genomics by insertional mutagenesis in tomato.