975 results for anisotropic finite-size scaling
Abstract:
Defining the limits of an urban agglomeration is essential for both fundamental and applied studies in quantitative and theoretical geography. A simple and consistent way of defining such urban clusters is important for performing different statistical analyses and comparisons. Traditionally, agglomerations are defined using a rather qualitative approach based on various statistical measures; the definition generally varies from one country to another, and the data taken into account differ. In this paper, we explore the use of the City Clustering Algorithm (CCA) for defining agglomerations in Switzerland. This algorithm provides a systematic and easy way to define an urban area based only on population data, and allows the spatial resolution used to define the urban clusters to be specified. The results obtained at different resolutions are compared and analysed, and the effect of filtering the data is investigated. Different scales and parameters highlight different phenomena. The study of Zipf's law using the visual rank-size rule shows that it is valid only for some specific urban clusters, inside a narrow range of the spatial resolution of the CCA. The scale at which one main cluster emerges can also be identified in the Zipf's law analysis. Studying the urban clusters at different scales using the lacunarity measure, a complementary measure to the fractal dimension, highlights the change of scale at a given range.
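The rank-size rule mentioned above can be checked numerically: sort cluster populations in decreasing order, regress log(size) on log(rank), and Zipf's law corresponds to a slope near -1. A minimal sketch with synthetic populations (the data below are hypothetical, not the Swiss clusters of the paper):

```python
import math

def zipf_exponent(populations):
    """Least-squares slope of log(size) vs. log(rank).

    Zipf's law corresponds to a slope close to -1.
    """
    ordered = sorted(populations, reverse=True)
    xs = [math.log(rank) for rank in range(1, len(ordered) + 1)]
    ys = [math.log(pop) for pop in ordered]
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    sxx = sum((x - mx) ** 2 for x in xs)
    sxy = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    return sxy / sxx

# Hypothetical cluster populations following an exact Zipf distribution
pops = [1_000_000 / rank for rank in range((1), 51)]
exponent = round(zipf_exponent(pops), 2)  # -1.0 for an exact Zipf distribution
```

In practice one would inspect how far the fitted exponent stays near -1 as the CCA resolution varies, which is essentially what the visual rank-size analysis in the paper does.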
Abstract:
Electrical deep brain stimulation (DBS) is an efficient method for treating movement disorders. Many models of DBS, based mostly on finite elements, have recently been proposed to better understand the interaction between the electrical stimulation and the brain tissues. In monopolar DBS, which is widely used clinically, the implanted pulse generator (IPG) serves as the reference electrode (RE). In this paper, the influence of the RE model on monopolar DBS is investigated. For that purpose, a finite element model of the full electric loop, including the head, the neck and the superior chest, is used. The head, neck and superior chest are modelled with simple structures such as parallelepipeds and cylinders, while the tissues surrounding the electrode are accurately modelled from diffusion tensor magnetic resonance imaging (DT-MRI) data. Three different RE configurations are compared with a commonly used model of reduced size. The electrical impedance seen by the DBS system and the potential distribution are computed for each model. Moreover, axons are modelled to compute the area of tissue activated by the stimulation. Results show that these indicators are influenced by the surface and position of the RE: using an RE model corresponding to the implanted device rather than the usual simplified model increases the system impedance (+48%) and reduces the area of activated tissue (-15%).
Abstract:
Electrical Impedance Tomography (EIT) is an imaging method which enables a volume conductivity map of a subject to be produced from multiple impedance measurements. It has the potential to become a portable non-invasive imaging technique of particular use in imaging brain function. Accurate numerical forward models may be used to improve image reconstruction but have, until now, employed an assumption of isotropic tissue conductivity. This may be expected to introduce inaccuracy, as body tissues, especially those such as white matter and the skull in head imaging, are highly anisotropic. The purpose of this study was, for the first time, to develop a method for incorporating anisotropy in a forward numerical model for EIT of the head and to assess the resulting improvement in image quality in the case of linear reconstruction for one example of the human head. A realistic Finite Element Model (FEM) of an adult human head with segments for the scalp, skull, CSF and brain was produced from a structural MRI. Anisotropy of the brain was estimated from a diffusion tensor MRI of the same subject, and anisotropy of the skull was approximated from the structural information. A method for incorporating anisotropy in the forward model and using it in image reconstruction was produced. The improvement in reconstructed image quality was assessed in computer simulation by producing forward data and then performing linear reconstruction using a sensitivity matrix approach. The mean boundary data difference between the anisotropic and isotropic forward models for a reference conductivity was 50%. Use of the correct anisotropic FEM in image reconstruction, as opposed to an isotropic one, corrected an error of 24 mm in imaging a 10% conductivity decrease located in the hippocampus, improved localisation of conductivity changes deep in the brain and due to epilepsy by 4-17 mm, and, overall, led to a substantial improvement in image quality.
This suggests that incorporation of anisotropy in numerical models used for image reconstruction is likely to improve EIT image quality.
Abstract:
The aim of the present study was to determine the cycle length of spermatogenesis in three species of shrew, Suncus murinus, Sorex coronatus and Sorex minutus, and to assess the relative influence of variation in basal metabolic rate (BMR) and mating system (level of sperm competition) on the observed rate of spermatogenesis, together with data from previously studied shrew species (Sorex araneus, Crocidura russula and Neomys fodiens). The dynamics of sperm production were determined by tracing 5-bromodeoxyuridine in the DNA of germ cells. As a continuous scaling of mating systems is not evident, the level of sperm competition was evaluated through relative testis size (RTS), which correlates significantly with it. The cycle durations estimated by linear regression were 14.3 days (RTS 0.3%) in Suncus murinus, 9.0 days (RTS 0.5%) in Sorex coronatus and 8.5 days (RTS 2.8%) in Sorex minutus. In regression and multiple regression analyses including all six studied shrew species, cycle length was significantly correlated with BMR (r² = 0.73) and RTS (r² = 0.77). Sperm competition as an ultimate factor evidently leads to a shortening of spermatogenesis in order to increase sperm production. BMR may act in the same way, independently or as a proximate factor, as revealed by the covariation, but other factors (related to testis size and thus to mating system) may also be involved.
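The linear-regression step can be sketched as follows: BrdU-labelled germ cells advance through the seminiferous cycle at a constant rate, so regressing the fraction of the cycle traversed by the most advanced labelled cells against time since injection gives a slope whose reciprocal estimates the cycle length. The numbers below are invented for illustration, and the actual estimation procedure in the study was more involved:

```python
def ols_slope(xs, ys):
    """Ordinary least-squares slope of ys against xs."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    sxy = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sxx = sum((x - mx) ** 2 for x in xs)
    return sxy / sxx

# Hypothetical observations: days after BrdU injection vs. fraction of one
# seminiferous cycle traversed by the most advanced labelled cells.
days = [1.0, 3.0, 5.0, 7.0]
fraction = [0.12, 0.35, 0.59, 0.82]  # invented values for illustration

# Reciprocal of the slope = days per full cycle (~8.5 here, cf. Sorex minutus)
cycle_length = 1.0 / ols_slope(days, fraction)
```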
Abstract:
Biological scaling analyses employing the widely used bivariate allometric model are beset by at least four interacting problems: (1) choice of an appropriate best-fit line with due attention to the influence of outliers; (2) objective recognition of divergent subsets in the data (allometric grades); (3) potential restrictions on statistical independence resulting from phylogenetic inertia; and (4) the need for extreme caution in inferring causation from correlation. A new non-parametric line-fitting technique has been developed that eliminates the requirement for normality of distribution, greatly reduces the influence of outliers and permits objective recognition of grade shifts in substantial datasets. This technique is applied in scaling analyses of mammalian gestation periods and of neonatal body mass in primates. These analyses feed into a re-examination, conducted with partial correlation analysis, of the maternal energy hypothesis for mammalian brain evolution, which posits links between body size and brain size in neonates and adults, gestation period and basal metabolic rate. Much has been made of the potential problem of phylogenetic inertia as a confounding factor in scaling analyses. However, this problem may be less severe than previously suspected, because nested analyses of variance conducted on residual variation (rather than on raw values) reveal that there is considerable variance at low taxonomic levels. In fact, limited divergence in body size between closely related species is one of the prime examples of phylogenetic inertia. One common approach to eliminating perceived problems of phylogenetic inertia in allometric analyses has been the calculation of 'independent contrast values'. It is demonstrated that the reasoning behind this approach is flawed in several ways.
Calculation of contrast values for closely related species of similar body size is, in fact, highly questionable, particularly when there are major deviations from the best-fit line for the scaling relationship under scrutiny.
Abstract:
Design aspects of the Transversally Laminated Anisotropic (TLA) Synchronous Reluctance Motor (SynRM) are studied, and the machine's performance is compared with that of the Induction Motor (IM). The SynRM rotor structure is designed and manufactured for a 30 kW, four-pole, three-phase squirrel-cage induction motor stator. Both the IM and the SynRM were supplied by a sensorless Direct Torque Controlled (DTC) variable-speed drive. Attention is also paid to estimating the power range in which the SynRM may compete successfully with a same-size induction motor, and a technical loss-reduction comparison between the IM and the SynRM in variable-speed drives is carried out. The Finite Element Method (FEM) is used to analyse the number, location and width of the flux barriers used in a multiple-segment rotor, aiming at a high saliency ratio and a high motor torque. Different FEM calculations for analysing SynRM performance are compared, and the possibility of taking iron losses into account in the FEM is studied. Comparison between the calculated and measured values shows that the design methods are reliable. A new application of the IEEE 112 measurement method is developed and used, in particular, for determining stray load losses in laboratory measurements. The study shows that, with some special measures, the efficiency of the TLA SynRM is equivalent to that of a high-efficiency IM. The power factor of the SynRM at rated load is smaller than that of the IM; however, at lower partial load this difference decreases, and the SynRM probably attains a better power factor than the IM. The large rotor inductance ratio of the SynRM allows good estimation of the rotor position, which is very advantageous for designing a rotor-position-sensorless motor drive. Using the FEM-designed multi-layer transversally laminated rotor with damper windings, it is possible to design a directly network-driven motor without degrading efficiency or power factor compared with the IM.
Abstract:
Partial-thickness tears of the supraspinatus tendon frequently occur at its insertion on the greater tubercle of the humerus, causing pain and reduced strength and range of motion. The goal of this work was to quantify the loss of loading capacity due to tendon tears at the insertion area. A finite element model of the supraspinatus tendon was developed using in vivo magnetic resonance imaging data. The tendon was represented by an anisotropic hyperelastic constitutive law identified from experimental measurements. A failure criterion was proposed and calibrated with experimental data. A partial-thickness tear was gradually increased, starting from the deep articular-sided fibres, and for different values of tear thickness the tendon was mechanically loaded up to failure. The numerical model predicted a loss in loading capacity of the tendon as the tear thickness progressed; tendon failure became more likely when the tear exceeded 20% of the tendon thickness. The predictions of the model were consistent with experimental studies. Partial-thickness tears below 40% are stable enough to withstand physiotherapeutic exercises, whereas above a 60% tear, surgery should be considered to restore shoulder strength.
Abstract:
To predict the capacity of a structure, or the point beyond which it becomes unstable, calculation of the critical crack size is important. Structures usually contain several cracks, but not all of them lead to failure or reach the critical size; identifying the harmful cracks, or the crack size most likely to lead to failure, therefore provides criteria for a structure's capacity at elevated temperature. The scope of this thesis was to calculate fracture parameters such as the stress intensity factor and the J-integral, together with the plastic and ultimate capacity of the structure, in order to estimate the critical crack size for this specific structure. Several three-dimensional (3D) simulations, using the finite element method in the Ansys program and the boundary element method in the FRANC3D program, were carried out to calculate the fracture parameters; the results, combined with laboratory tests (load-displacement curve, J-resistance curve, and yield and ultimate stress), led to an estimate of the critical crack size. The two types of fracture usually affected by temperature, elastic and elastic-plastic, were simulated by performing several linear elastic and nonlinear elastic analyses. Geometric details of the weldment, namely the flank angle and toe radius, were also studied independently to estimate the location of crack initiation and to simulate the stress field in the early stages of crack extension. The structure's capacity at room temperature (20 ºC) was also reviewed, and comparison of the results at different temperatures (20 ºC and -40 ºC) provides a threshold for the structure's behaviour within the defined range.
Abstract:
This thesis presents point-contact measurements between superconductors (Nb, Ta, Sn, Al, Zn) and ferromagnets (Co, Fe, Ni) as well as non-magnetic metals (Ag, Au, Cu, Pt). The point contacts were fabricated using the shear method. The differential resistance of the contacts was measured either in liquid He at 4.2 K or in vacuum in a dilution refrigerator at varying temperatures down to 0.1 K. The contact properties were investigated as a function of size and temperature. The measured Andreev-reflection spectra were analysed in the framework of the BTK model, a three-parameter model that describes current transport across a superconductor-normal conductor interface. The original BTK model was modified to include the effects of spin polarization or a finite lifetime of the Cooper pairs. Our polarization values for the ferromagnets at 4.2 K agree with the literature data, but the analysis was ambiguous because the experimental spectra, both with ferromagnets and with non-magnets, could be described equally well with either spin polarization or finite-lifetime effects in the BTK model. With the polarization model the Z parameter varies from almost 0 to 0.8, while the lifetime model produces Z values close to 0.5. Measurements at lower temperatures partly lift this ambiguity, because the magnitude of thermal broadening becomes small enough to separate lifetime broadening from polarization. The reduced magnitude of the superconducting anomalies for Zn-Fe contacts required an additional modification of the BTK model, implemented as a scaling factor; adding this parameter led to reduced polarization values. However, reliable data are difficult to obtain because different parameter sets produce almost identical spectra.
Abstract:
The aim of this work was to calibrate material properties, including strength and strain values, for the different material zones of ultra-high-strength steel (UHSS) welded joints under monotonic static loading. UHSS is heat sensitive and is softened by the heat of welding; the affected region is the heat-affected zone (HAZ). Cylindrical specimens were cut from welded joints of Strenx® 960 MC and Strenx® Tube 960 MH and examined by tensile testing, and the hardness values across the specimens' cross sections were measured. Using correlations between hardness and strength, initial material properties were obtained. Specimens of the same size, with the same material zones as the real specimens, were created and defined in the finite element method (FEM) software Abaqus 6.14-1, with loading and boundary conditions defined according to the tensile tests. Using the initial material properties derived from the hardness-strength correlations (true stress-strain values) as the main Abaqus input, the FEM was used to simulate the tensile test. By comparing the Abaqus FEM results with the measured tensile test results, the initial material properties were revised and reused as software input until fully calibrated, so that the FEM and tensile test results deviated minimally. Two different types of S960 were used: 960 MC plates and a structural hollow section 960 MH X-joint, welded with Böhler™ X96 filler material. In welded joints the following zones typically appear: weld (WEL), coarse-grained (HCG) and fine-grained (HFG) heat-affected zone, annealed zone, and base material (BaM). The results showed that the HAZ is softened by the heat input during welding; for all specimens, the softened zone's strength is reduced, making it the weakest zone, where fracture occurs during loading. The stress concentration of a notched specimen can represent the properties of the notched zone. The load-displacement diagram from the FEM model matches the experiments when the material properties are calibrated as a compromise between the two hardness-strength correlations.
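The hardness-to-strength step can be illustrated with a generic rule of thumb for steels (ultimate tensile strength in MPa roughly 3.0-3.3 times the Vickers hardness). The factor and the hardness readings below are assumptions for illustration only, not the correlations actually calibrated in the thesis:

```python
def strength_from_hardness(hv, factor=3.2):
    """Rough empirical conversion: Vickers hardness -> tensile strength [MPa].

    The factor ~3.0-3.3 is a generic rule of thumb for steels, used here
    only as a placeholder for the thesis's calibrated correlations.
    """
    return factor * hv

# Hypothetical hardness readings (HV) for the weld zones named above
zones = {"BaM": 330, "HCG": 370, "HFG": 360, "annealed": 270, "WEL": 350}
estimates = {zone: strength_from_hardness(hv) for zone, hv in zones.items()}

# The softened (annealed) HAZ comes out weakest, matching where fracture occurs
weakest = min(estimates, key=estimates.get)
```

Such estimates would serve only as the initial FEM input; the calibration loop described above then adjusts them until the simulated and measured load-displacement curves agree.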
Abstract:
The allometric scaling relationship observed between metabolic rate (MR) and species body mass can be partially explained by differences in cellular MR (Porter & Brand, 1995). Here, I studied cultured cell lines derived from ten mammalian species to determine whether cells propagated in an identical environment exhibited MR scaling. Oxidative and anaerobic metabolic parameters did not scale significantly with donor body mass in cultured cells, indicating the absence of an intrinsic MR setpoint. The rate of oxygen delivery has been proposed to limit cellular metabolic rates in larger organisms (West et al., 2002); cells were therefore cultured under a variety of physiologically relevant oxygen tensions to investigate the effect of oxygen on cellular metabolic rates. Exposure to higher medium oxygen tensions resulted in increased metabolic rates in all cells. Higher MRs have the potential to produce more reactive oxygen species (ROS), which could cause genomic instability and thus reduced lifespan. Longer-lived species are more resistant to oxidative stress (Kapahi et al., 1999), which may be due to greater antioxidant and/or DNA repair capacities. This hypothesis was addressed by culturing primary dermal fibroblasts from eight mammalian species ranging in maximum lifespan from 5 to 120 years. Only the antioxidant manganese superoxide dismutase (MnSOD) scaled positively with species lifespan (p < 0.01). Oxidative damage to DNA is primarily repaired by the base excision repair (BER) pathway. BER enzyme activities showed either no correlation with donor species lifespan or, as in the case of polymerase β, a negative correlation (p < 0.01). Typically, mammalian cells are cultured in a 20% O2 (atmospheric) environment, which is several-fold higher than cells experience in vivo. Therefore, the secondary aim of this study was to determine the effect of culturing mammalian cells at a more physiological oxygen tension (3%) on BER and antioxidant enzyme activities.
Consistently, standard culture conditions induced higher antioxidant and DNA base excision repair activities than are present under a more physiological oxygen concentration. Standard culture conditions are therefore inappropriate for studies of oxidative stress-induced activities, and species differences in fibroblast BER capacities may represent differences in the ability to respond to oxidative stress. An interesting outcome from this study was that some inherent cellular properties are maintained in culture (i.e. stress responses) while others are not (i.e. MR).
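The allometric model referred to above, MR = a·M^b, is typically fitted by least squares on log-transformed data; an exponent b near 0.75 indicates Kleiber-like scaling, while b near 0 indicates no scaling, as reported for the cultured cells. A minimal sketch with synthetic data (values invented, not from the study):

```python
import math

def allometric_fit(masses, rates):
    """Fit MR = a * M**b by least squares on log-transformed data."""
    xs = [math.log(m) for m in masses]
    ys = [math.log(r) for r in rates]
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    b = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / \
        sum((x - mx) ** 2 for x in xs)
    a = math.exp(my - b * mx)
    return a, b

# Synthetic whole-animal data following Kleiber-like scaling, MR ~ M^0.75
masses = [0.02, 0.2, 2.0, 20.0, 200.0]      # body mass, kg
rates = [3.9 * m ** 0.75 for m in masses]   # metabolic rate, W (synthetic)
a, b = allometric_fit(masses, rates)        # b is recovered as 0.75
```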
Abstract:
A wide range of tests for heteroskedasticity has been proposed in the econometric and statistics literature. Although a few exact homoskedasticity tests are available, the commonly employed procedures are quite generally based on asymptotic approximations which may not provide good size control in finite samples. A number of recent studies have sought to improve the reliability of common heteroskedasticity tests using Edgeworth, Bartlett, jackknife and bootstrap methods, yet the latter remain approximate. In this paper, we describe a solution to the problem of controlling the size of homoskedasticity tests in linear regression contexts. We study procedures based on the standard test statistics [e.g., the Goldfeld-Quandt, Glejser, Bartlett, Cochran, Hartley, Breusch-Pagan-Godfrey, White and Szroeter criteria] as well as tests for autoregressive conditional heteroskedasticity (ARCH-type models). We also suggest several extensions of the existing procedures (sup-type and combined test statistics) to allow for unknown breakpoints in the error variance. We exploit the technique of Monte Carlo tests to obtain provably exact p-values for both the standard and the newly suggested tests. We show that the Monte Carlo test procedure conveniently solves intractable null distribution problems, in particular those raised by the sup-type and combined test statistics, as well as (when relevant) unidentified-nuisance-parameter problems under the null hypothesis. The proposed method works in exactly the same way with both Gaussian and non-Gaussian disturbance distributions [such as heavy-tailed or stable distributions]. The performance of the procedures is examined by simulation. The Monte Carlo experiments conducted focus on: (1) ARCH, GARCH and ARCH-in-mean alternatives; (2) the case where the variance increases monotonically with (i) one exogenous variable or (ii) the mean of the dependent variable; (3) grouped heteroskedasticity; and (4) breaks in variance at unknown points. We find that the proposed tests achieve perfect size control and have good power.
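The Monte Carlo test technique on which both this abstract and the next rely admits a compact illustration: simulate the pivotal statistic under the null hypothesis and rank the observed value among the replicates, which yields an exactly valid p-value for any finite number of replications. A minimal sketch with a toy statistic (the statistic and numbers are illustrative, not those of the paper):

```python
import random

def monte_carlo_p_value(observed, simulate, n_rep=99, seed=0):
    """Exact Monte Carlo p-value: rank the observed statistic among
    replicates simulated under the null (Dwass/Barnard technique)."""
    rng = random.Random(seed)
    sims = [simulate(rng) for _ in range(n_rep)]
    exceed = sum(1 for s in sims if s >= observed)
    return (exceed + 1) / (n_rep + 1)

# Toy pivotal statistic: the maximum |Z| over 10 standard normal draws
def max_abs_normal(rng):
    return max(abs(rng.gauss(0.0, 1.0)) for _ in range(10))

# An extreme observed value yields a small, exactly valid p-value
p = monte_carlo_p_value(5.0, max_abs_normal, n_rep=99)
```

With 99 replications the attainable p-values are multiples of 1/100, so the test has exact size at conventional levels regardless of the null distribution's form.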
Abstract:
In the literature on tests of normality, much concern has been expressed over the problems associated with residual-based procedures. Indeed, the specialized tables of critical points needed to perform the tests have been derived for the location-scale model; hence, reliance on the available significance points in the context of regression models may cause size distortions. We propose a general solution to the problem of controlling the size of normality tests for the disturbances of the standard linear regression model, based on the technique of Monte Carlo tests.
Abstract:
There are many ways to generate geometrical models for numerical simulation, and most of them start with a segmentation step to extract the boundaries of the regions of interest. This paper presents an algorithm to generate a patient-specific three-dimensional geometric model, based on a tetrahedral mesh, without an initial extraction of contours from the volumetric data. Using the information directly available in the data, such as grey levels, we built a metric to drive a mesh adaptation process. The metric specifies the size and orientation of the tetrahedral elements everywhere in the mesh. Our method, which produces anisotropic meshes, gives good results with synthetic and real MRI data. The quality of the resulting models has been evaluated qualitatively and quantitatively by comparison with an analytical solution and with a segmentation made by an expert. Results show that, in 90% of cases, our method gives meshes as good as or better than a similar isotropic method, based on the accuracy of the volume reconstruction for a given mesh size. Moreover, a comparison of the Hausdorff distances between the adapted meshes of both methods and ground-truth volumes shows that our method reduces reconstruction errors faster. Copyright © 2015 John Wiley & Sons, Ltd.
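The Hausdorff distance used for the error comparison above is the worst-case mismatch between two shapes: for each point of one set take the distance to the nearest point of the other, and report the largest such value in either direction. A minimal sketch on toy point sets (coordinates invented, not from the paper):

```python
def hausdorff_distance(A, B):
    """Symmetric Hausdorff distance between two finite point sets."""
    def dist(p, q):
        return sum((a - b) ** 2 for a, b in zip(p, q)) ** 0.5

    def directed(X, Y):
        # For each x, distance to its nearest neighbour in Y; keep the worst
        return max(min(dist(x, y) for y in Y) for x in X)

    return max(directed(A, B), directed(B, A))

# Toy 2-D samples standing in for mesh-surface and ground-truth points
mesh_pts = [(0.0, 0.0), (1.0, 0.0), (1.0, 1.0)]
truth_pts = [(0.0, 0.1), (1.0, 0.0), (1.0, 1.2)]
error = hausdorff_distance(mesh_pts, truth_pts)  # 0.2: worst-case mismatch
```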
Abstract:
Warships are generally sleek and slender, with V-shaped sections and block coefficients below 0.5, compared with the fuller forms and higher values of commercial ships. They normally operate in the higher Froude number regime, and their hydrodynamic design is primarily aimed at achieving higher speeds with minimum power. The structural design and analysis methods therefore differ from those for commercial ships. Certain design guidelines are given in documents such as the Naval Engineering Standards, and one recent development in this regard is the introduction of classification society rules for the design of warships. The marine environment imposes subjective and objective uncertainties on ship structures: uncertainties in loads, material properties and so on make reliable prediction of ship structural response a difficult task. Strength, stiffness and durability criteria for warship structures can be established by investigations based on elastic analysis, ultimate strength analysis and reliability analysis. For the analysis of complicated warship structures, special means and valid approximations are required. Preliminary structural design of a frigate-size ship has been carried out. A finite element model of the hold, representative of the complexities of the geometric configuration, has been created using the finite element software NISA. Two other models representing the geometry to a more limited extent have also been created: one with two transverse frames and the attached plating, along with the longitudinal members, and the other representing the plating and longitudinal stiffeners between two transverse frames. Linear static analyses of the three models have been carried out, each with three different boundary conditions. The structural responses have been checked for deflections and stresses against the permissible values, and the structure has been found adequate in all cases.
The stresses and deflections predicted by the frame model are comparable with those of the hold model, but no such comparison has been possible between the inter-stiffener plating model and the other two models. Progressive collapse analyses of the models have been conducted for the three boundary conditions, considering geometric nonlinearity and then, for the hold and frame models, combined geometric and material nonlinearity. The von Mises-Ilyushin yield criterion with an elastic-perfectly plastic stress-strain curve has been chosen. In each case, P-Delta curves have been generated and the ultimate load causing failure (the ultimate load factor) has been identified as a multiple of the design load specified by the NES. Reliability analysis of the hull module under combined geometric and material nonlinearities has been conducted, with Young's modulus and the shell thickness chosen as the random variables; randomly generated values have been used in the analysis. The First Order Second Moment method has been used to predict the reliability index and, thereafter, the probability of failure. The values have been compared against standard values published in the literature.