89 results for ISOTROPY
Abstract:
In this paper, we present an algebraic method to study and design spatial parallel manipulators that demonstrate isotropy in the force and moment distributions. We use the force and moment transformation matrices separately, and derive conditions for their isotropy individually as well as in combination. The isotropy conditions are derived in closed form in terms of the invariants of the quadratic forms associated with these matrices. The formulation is applied to a class of Stewart platform manipulators, and a multi-parameter family of isotropic manipulators is identified analytically. We show that it is impossible to obtain a spatially isotropic configuration within this family. We also compute the isotropic configurations of an existing manipulator and demonstrate a procedure for designing the manipulator for isotropy at a given configuration. (C) 2008 Elsevier Ltd. All rights reserved.
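The isotropy condition on a transformation matrix H reduces to the requirement that the quadratic form H H^T be a scalar multiple of the identity (all of its invariant eigenvalues coincide). A minimal numerical check of that condition, assuming a generic 3x6 transformation matrix; the function name and example matrices are illustrative, not the paper's closed-form derivation:

```python
import numpy as np

def is_force_isotropic(H, tol=1e-9):
    """Check the isotropy condition on a transformation matrix H:
    the quadratic form H H^T must be a scalar multiple of the identity
    (equivalently, all of its eigenvalues coincide)."""
    g = H @ H.T
    sigma2 = np.trace(g) / g.shape[0]      # the common eigenvalue, if any
    return bool(np.allclose(g, sigma2 * np.eye(g.shape[0]), atol=tol))
```

For example, H = np.hstack([np.eye(3), np.eye(3)]) gives H H^T = 2 I and passes, while H = np.ones((3, 6)) fails.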
Abstract:
We revise and extend the extreme value statistic, introduced in Gupta et al., to study direction dependence in the high-redshift supernova data, arising either from departures from the cosmological principle or from direction-dependent statistical systematics in the data. We introduce a likelihood function that analytically marginalizes over the Hubble constant and use it to extend our previous statistic. We also introduce a new statistic that is sensitive to direction dependence arising from living off-centre inside a large void, as well as from the previously mentioned sources of anisotropy. We show that for large data sets this statistic has a limiting form that can be computed analytically. We apply our statistics to the gold data sets from Riess et al., as in our previous work. Our revision and extension of the previous statistic show that marginalizing over the Hubble constant, instead of using its best-fitting value, has only a marginal effect on our results. However, correcting errors in our previous work reduces the level of non-Gaussianity found earlier in the 2004 gold data. The revised results for the 2007 gold data show that the data are consistent with isotropy and Gaussianity. Our second statistic confirms these results.
Abstract:
In this paper, we present an algebraic method to study and design spatial parallel manipulators that demonstrate isotropy in the force and moment distributions. We use the force and moment transformation matrices separately, and derive conditions for their isotropy individually as well as in combination. The isotropy conditions are derived in closed form in terms of the invariants of the quadratic forms associated with these matrices. The formulation has been applied to a class of Stewart platform manipulators. We obtain multi-parameter families of isotropic manipulators analytically. In addition to computing the isotropic configurations of an existing manipulator, we demonstrate a procedure for designing the manipulator for isotropy at a given configuration.
Abstract:
Directional observations arise in a number of situations. The first inferential question to answer when dealing with such data is, “Are they isotropic, or uniformly distributed?” The answer to this question has a long history, which we retrace briefly before providing exact and approximate solutions to the so-called “Pearson’s Random Walk” problem.
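The classical test of the question “Are they isotropic?” for circular data is the Rayleigh test, which is closely tied to Pearson's random walk: under uniformity, the resultant of n unit steps behaves like a random walk. A short sketch, assuming 2-D directions given as angles and using the large-sample p-value approximation p ≈ exp(-n R̄²); the function name is illustrative:

```python
import numpy as np

def rayleigh_test(angles):
    """Rayleigh test for uniformity (isotropy) of circular directions.

    Returns the mean resultant length R_bar and the large-sample
    approximation to the p-value, p ~ exp(-n * R_bar**2).
    """
    n = len(angles)
    c = np.cos(angles).sum()
    s = np.sin(angles).sum()
    r_bar = np.hypot(c, s) / n   # mean resultant length of the walk
    z = n * r_bar**2             # Rayleigh statistic
    return r_bar, np.exp(-z)

# Uniform directions: a large p-value is expected, so isotropy is not rejected;
# tightly clustered directions give a tiny p-value and reject it.
rng = np.random.default_rng(0)
r_bar_u, p_u = rayleigh_test(rng.uniform(0.0, 2.0 * np.pi, 1000))
```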
Abstract:
Fractals with microscopic anisotropy show a unique type of macroscopic isotropy restoration that is absent in Euclidean space [M. T. Barlow et al., Phys. Rev. Lett. 75, 3042]. In this paper the isotropy restoration feature is considered for a family of two-dimensional Sierpinski gasket type fractal resistor networks. A parameter xi is introduced to describe this phenomenon. Our numerical results show that xi satisfies the scaling law xi ~ l^(-alpha), where l is the system size and alpha is an exponent independent of the degree of microscopic anisotropy, characterizing the isotropy restoration feature of the fractal systems. By changing the underlying fractal structure towards the Euclidean triangular lattice through increasing the side length b of the gasket generators, the fractal-to-Euclidean crossover behavior of the isotropy restoration feature is discussed.
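A scaling law of the form xi ~ l^(-alpha) is typically extracted from numerical data by a least-squares fit in log-log coordinates. A sketch with synthetic data; the values of l, the prefactor and the exponent are made up for illustration:

```python
import numpy as np

# Hypothetical xi(l) data generated to obey xi ~ l**(-alpha); the
# exponent is recovered by least squares on the log-log values, as one
# would do with the numerical results for the gasket networks.
l = np.array([8.0, 16.0, 32.0, 64.0, 128.0])
alpha_true = 0.75                      # made-up exponent for illustration
xi = 2.0 * l ** (-alpha_true)

slope, intercept = np.polyfit(np.log(l), np.log(xi), 1)
alpha_est = -slope                     # recovers alpha_true up to rounding
```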
Abstract:
The objectives of the study were to assess changes in fine root anisotropy and specific root lengths throughout the development of Eucalyptus grandis (W. Hill ex Maiden) plantations and to establish a predictive model of root length density (RLD) from root intercept counts on trench walls. Fine root densities (<1 mm in diameter) were studied in 6-, 12-, 22-, 28-, 54-, 68- and 72-month-old E. grandis plantations established on deep Ferralsols in southern Brazil. Fine root intercepts were counted on 3 faces of 90-198 soil cubes (1 dm^3 in volume) in each stand and fine root lengths (L) were measured inside 576 soil cubes, sampled between the depths of 10 cm and 290 cm. The number of fine root intercepts was counted on one vertical face perpendicular to the planting row (N_t), one vertical face parallel to the planting row (N_l) and one horizontal face (N_h), for each soil cube sampled. An overall isotropy of fine roots was shown by paired Student's t-tests between the numbers of fine roots intersecting each face of soil cubes at most stand ages and soil depths. Specific root lengths decreased with stand age in the upper soil layers and tended to increase in deep soil layers at the end of the rotation. A linear regression established between N_t and L for all the soil cubes sampled accounted for 36% of the variability of L. Such a regression computed for mean N_t and L values at each sampling depth and stand age explained only 55% of the variability, as a result of large differences in the relationship between L and N_t depending on stand productivity. The equation RLD = 1.89*LAI*N_t, where LAI was the stand leaf area index (m^2 m^-2) and N_t was expressed as the number of root intercepts per cm^2, made it possible to predict accurately (R^2 = 0.84) and without bias the mean RLDs (cm cm^-3) per depth in each stand, for the whole data set of 576 soil cubes sampled between 2 years of age and the end of the rotation.
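The final regression can be wrapped directly as a predictive function. A sketch of the reported equation RLD = 1.89*LAI*N_t, with units as in the abstract; the function name is illustrative:

```python
def root_length_density(lai, nt_per_cm2):
    """Predict root length density (cm cm^-3) from the stand leaf area
    index LAI (m^2 m^-2) and the number of root intercepts per cm^2 on
    a vertical trench face, using the regression RLD = 1.89 * LAI * N_t
    reported for the E. grandis data set."""
    return 1.89 * lai * nt_per_cm2
```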
Abstract:
We review the basic hypotheses which motivate the statistical framework used to analyze the cosmic microwave background, and how that framework can be enlarged as we relax those hypotheses. In particular, we try to separate as much as possible the questions of gaussianity, homogeneity, and isotropy from each other. We focus both on isotropic estimators of nongaussianity and on statistically anisotropic estimators of gaussianity, with particular emphasis on their signatures and the enhanced cosmic variances that become increasingly important as our putative Universe becomes less symmetric. After reviewing the formalism behind some simple model-independent tests, we discuss how these tests can be applied to CMB data when searching for large-scale anomalies. Copyright © 2010 L. Raul Abramo and Thiago S. Pereira.
Abstract:
We consider a generalized discriminant associated to a symmetric space which generalizes the discriminant of real symmetric matrices, and note that it can be written as a sum of squares of real polynomials. A method to estimate the minimum number of squares required to represent the discriminant is developed and applied in examples.
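The simplest instance of this representation is the 2x2 case: for a real symmetric matrix [[a, b], [b, c]], the discriminant of the characteristic polynomial equals (a - c)^2 + 4b^2, a sum of two squares, and also the squared eigenvalue gap. A quick numerical check of this identity:

```python
import numpy as np

# For a real symmetric 2x2 matrix [[a, b], [b, c]], the discriminant of
# the characteristic polynomial is tr^2 - 4 det = (a - c)**2 + 4*b**2:
# already a sum of two squares, and equal to the squared eigenvalue gap.
def disc(a, b, c):
    return (a + c) ** 2 - 4.0 * (a * c - b * b)

a, b, c = 2.0, -3.0, 5.0
assert disc(a, b, c) == (a - c) ** 2 + 4.0 * b ** 2
eigs = np.linalg.eigvalsh(np.array([[a, b], [b, c]]))
# (eigs[1] - eigs[0])**2 equals disc(a, b, c) up to rounding
```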
Abstract:
The aim of this study was to develop a gradient structure tensor (GST) based methodology for accurately measuring the degree of transverse isotropy in trabecular bone. Using femoral sub-regions scanned with high-resolution peripheral QCT (HR-pQCT) and clinical-resolution QCT, trabecular orientation was evaluated using the mean intercept length (MIL) and the GST on the HR-pQCT and QCT data, respectively. The influence of the local degree of transverse isotropy (DTI) and bone mineral density (BMD) was incorporated into the investigation. In addition, a power-based model was derived, rendering a 1:1 relationship between GST and MIL eigenvalues. A specific DTI threshold (DTI_thres) was found for each investigated size of region of interest (ROI), above which the estimate of the major trabecular direction from the GST deviated no more than 30° from the gold-standard MIL in 95% of the remaining ROIs (mean error: 16°). An inverse relationship between ROI size and DTI_thres was found for discrete ranges of BMD. A novel methodology has been developed whereby transverse isotropy measures of trabecular bone can be obtained from clinical QCT images for a given ROI size, DTI_thres and power coefficient. Including DTI may improve future clinical QCT finite-element predictions of bone strength and diagnoses of bone disease.
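The GST itself is straightforward to compute: average the outer product of the image gradient over the ROI and take the eigendecomposition of the resulting 3x3 tensor. A minimal sketch; the degree-of-transverse-isotropy measure shown, a ratio of the two transverse eigenvalues, is one plausible choice and not necessarily the paper's exact definition:

```python
import numpy as np

def gst_eigen(volume):
    """Gradient structure tensor of a 3-D image block: average the
    outer product of the intensity gradient over the ROI and return
    eigenvalues (ascending) and eigenvectors of the 3x3 tensor."""
    gz, gy, gx = np.gradient(volume.astype(float))
    g = np.stack([gx.ravel(), gy.ravel(), gz.ravel()])
    tensor = (g @ g.T) / g.shape[1]
    return np.linalg.eigh(tensor)

def transverse_isotropy_degree(w):
    """Illustrative degree-of-transverse-isotropy measure: ratio of the
    two transverse (smaller) eigenvalues; 1 means the two transverse
    directions are statistically indistinguishable."""
    return w[0] / w[1] if w[1] > 1e-12 else 1.0
```

For a volume that varies only along z, the dominant eigenvector aligns with z and the transverse eigenvalues vanish, so the measure reports perfect transverse isotropy.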
Abstract:
The flood flow in urbanised areas constitutes a major hazard to the population and infrastructure, as seen during the summer 2010-2011 floods in Queensland (Australia). Flood flows in urban environments have been studied only relatively recently, and no previous study has considered the impact of turbulence in the flow. During the 12-13 January 2011 flood of the Brisbane River, turbulence measurements were conducted in an inundated urban environment in Gardens Point Road, next to Brisbane's central business district (CBD), at relatively high frequency (50 Hz). The properties of the sediment flood deposits were characterised, and the acoustic Doppler velocimeter unit was calibrated to obtain both instantaneous velocity components and suspended sediment concentration in the same sampling volume with the same temporal resolution. While the flow motion in Gardens Point Road was subcritical, the water elevations and velocities fluctuated with a distinctive period between 50 and 80 s. The low-frequency fluctuations were linked with local topographic effects: i.e., a local choke induced by an upstream constriction between stairwells caused slow oscillations with a period close to the natural sloshing period of the car park. The instantaneous velocity data were analysed using a triple decomposition, and the same triple decomposition was applied to the water depth, velocity flux, suspended sediment concentration and suspended sediment flux data. The velocity fluctuation data showed a large energy component in the slow fluctuation range. For the first two tests at z = 0.35 m, the turbulence data suggested some isotropy. At z = 0.083 m, on the other hand, the findings indicated some flow anisotropy. The suspended sediment concentration (SSC) data presented a general trend of increasing SSC with decreasing water depth. During one test (T4), some long-period oscillations were observed with a period of about 18 minutes.
The cause of these oscillations remains unknown to the authors. The last test (T5) took place in very shallow water with high suspended sediment concentrations. It is suggested that the flow in the car park was disconnected from the main channel. Overall, the flow conditions at the sampling sites corresponded to a specific momentum between 0.2 and 0.4 m^2, which would be near the upper end of the scale for safe evacuation of individuals in flooded areas. But the authors do not believe the evacuation of individuals in Gardens Point Road would have been safe, because of the intense water surges and flow turbulence. More generally, any criterion for safe evacuation based solely upon the flow velocity, water depth or specific momentum cannot account for the hazards caused by flow turbulence, water depth fluctuations and water surges.
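The triple decomposition applied to the velocity, depth and sediment records splits each signal into a mean, a slow (wave-like) fluctuation and a residual turbulent part. A sketch using a centred moving average as the low-pass filter; the cutoff window is an assumption, chosen in studies of this kind between the turbulent and sloshing time scales:

```python
import numpy as np

def triple_decompose(u, window):
    """Triple decomposition u = U + u_slow + u_turb of a record:
    U is the record mean, u_slow a centred moving average of the
    deviation (the slow, wave-like fluctuation), and u_turb the
    residual turbulent part."""
    U = u.mean()
    dev = u - U
    kernel = np.ones(window) / window
    u_slow = np.convolve(dev, kernel, mode="same")
    u_turb = dev - u_slow
    return U, u_slow, u_turb

# Synthetic record: constant mean, slow oscillation, faster fluctuation.
t = np.arange(1000)
u = 1.5 + 0.3 * np.sin(2 * np.pi * t / 200) + 0.05 * np.cos(2 * np.pi * t / 7)
U, u_slow, u_turb = triple_decompose(u, 25)
```

By construction the three parts always sum back to the original record; the choice of window only moves energy between the slow and turbulent parts.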
Abstract:
During the late 20th century it was proposed that a design aesthetic reflecting current ecological concerns was required within the overall domain of the built environment, and specifically within landscape design. To address this, some authors suggested various theoretical frameworks upon which such an aesthetic could be based. Within these frameworks there was an underlying theme that the patterns and processes of Nature may have the potential to form this aesthetic — an aesthetic based on fractal rather than Euclidean geometry. In order to understand how fractal geometry, described as the geometry of Nature, could become the referent for a design aesthetic, this research examines the mathematical concepts of fractal geometry and the underlying philosophical concepts behind the terms ‘Nature’ and ‘aesthetics’. The findings of this initial research meant that a new definition of Nature was required in order to overcome the barrier presented by the western philosophical Nature-culture duality. This new definition of Nature is based on the type and use of energy. Similarly, it became clear that current usage of the term aesthetics has more in common with the term ‘style’ than with its correct philosophical meaning. The aesthetic philosophy of both art and the environment recognises different aesthetic criteria related to either the subject or the object, such as: aesthetic experience; aesthetic attitude; aesthetic value; aesthetic object; and aesthetic properties. Given these criteria, and the fact that the concept of aesthetics is still an active and ongoing philosophical discussion, this work focuses on the criteria of aesthetic properties and the aesthetic experience or response they engender. The examination of fractal geometry revealed that it is a geometry based on scale rather than on the location of a point within a three-dimensional space. This enables fractal geometry to describe the complex forms and patterns created through the processes of Wild Nature.
Although fractal geometry has been used to analyse the patterns of built environments from a plan perspective, it became clear from the initial review of the literature that little was known about the fractal properties of the environments people experience every day as they move through them. To address this, 21 different landscapes, ranging from highly developed city centres to relatively untouched landscapes of Wild Nature, were analysed. Although this work shows that the fractal dimension can be used to differentiate between overall landscape forms, it also shows that by itself it cannot differentiate between all of the images analysed. To overcome this, two further parameters based on the underlying structural geometry embedded within the landscape are discussed: the Power Spectrum Median Amplitude and the Level of Isotropy within the Fourier Power Spectrum. Detailed analysis of these parameters yields a greater understanding of the structural properties of landscapes. With this understanding, this research has moved the field of landscape design a step closer to being able to articulate a new aesthetic for ecological design.
Abstract:
X-ray microtomography (micro-CT) with micron resolution enables new ways of characterizing microstructures and opens pathways for forward calculations of multiscale rock properties. A quantitative characterization of the microstructure is the first step in this challenge. We developed a new approach to extract scale-dependent characteristics of porosity, percolation, and anisotropic permeability from 3-D microstructural models of rocks. The Hoshen-Kopelman algorithm of percolation theory is employed for a standard percolation analysis. The anisotropy of permeability is calculated by means of the star volume distribution approach. The local porosity distribution and local percolation probability are obtained using local porosity theory. Additionally, the local anisotropy distribution is defined and analyzed through two empirical probability density functions, the isotropy index and the elongation index. For such high-resolution data sets, the typical sizes of the CT images are on the order of gigabytes to tens of gigabytes, so an extremely large number of calculations is required. To address this large-memory problem, parallelization with OpenMP was used to optimally harness the shared-memory infrastructure on cache-coherent Non-Uniform Memory Access machines such as the iVEC SGI Altix 3700Bx2 supercomputer. We see adequate visualization of the results as an important element of this first pioneering study.
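The Hoshen-Kopelman algorithm labels connected clusters in a single raster scan with union-find bookkeeping, which is what makes it practical for gigabyte-scale images. A 2-D sketch of the labeling step; the paper's analysis is 3-D and parallelized, but the logic is the same, and the helper names are illustrative:

```python
import numpy as np

def hoshen_kopelman(grid):
    """Label connected occupied sites of a binary 2-D grid (4-neighbour
    connectivity) using the union-find bookkeeping of Hoshen-Kopelman."""
    labels = np.zeros(grid.shape, dtype=int)
    parent = [0]                       # parent[i]: representative of label i

    def find(i):
        while parent[i] != i:          # path-halving find
            parent[i] = parent[parent[i]]
            i = parent[i]
        return i

    next_label = 1
    rows, cols = grid.shape
    for r in range(rows):
        for c in range(cols):
            if not grid[r, c]:
                continue
            up = labels[r - 1, c] if r else 0
            left = labels[r, c - 1] if c else 0
            if not up and not left:    # new cluster
                parent.append(next_label)
                labels[r, c] = next_label
                next_label += 1
            elif up and left:          # merge the two clusters
                ru, rl = find(up), find(left)
                parent[max(ru, rl)] = min(ru, rl)
                labels[r, c] = min(ru, rl)
            else:                      # extend the existing cluster
                labels[r, c] = find(up or left)
    # second pass: collapse label equivalences to root labels
    for r in range(rows):
        for c in range(cols):
            if labels[r, c]:
                labels[r, c] = find(labels[r, c])
    return labels
```

A percolation analysis then asks, per cluster, whether its label appears on opposite faces of the sample.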