940 results for Metric cone
Abstract:
Science, technology, engineering, and mathematics (STEM) education is an emerging initiative in Australia, particularly in primary schools. This qualitative research aimed to understand Year 4 students' involvement in an integrated STEM education unit that focused on science concepts (e.g., states of matter, testing properties of materials) and mathematics concepts (e.g., 3D shapes and metric measurements) for designing, making and testing a strong and safe medical kit to insulate medicines (ice cubes) at desirable temperatures. Data collection tools included student work samples, photographs, written responses from students and the teacher, and researcher notes. In a post-hoc analysis, a pedagogical knowledge practice framework (i.e., planning, timetabling, preparation, teaching strategies, content knowledge, problem solving, classroom management, questioning, implementation, assessment, and viewpoints) was used to explain links to student outcomes in STEM education. The study showed how pedagogical knowledge practices may be linked to student outcomes (knowledge, understanding, skill development, and values and attitudes) for a STEM education activity.
Abstract:
Volcanic eruption centres of the mostly 4.5 Ma-5000 BP Newer Volcanics Province in the Hamilton area of southeastern Australia were examined in detail using a multifaceted approach, including ground truthing and analysis of ArcGIS Total Magnetic Intensity and seamless geology data, NASA Advanced Spaceborne Thermal Emission and Reflection Radiometer (ASTER) digital elevation models and Google Earth satellite image interpretation. Sixteen eruption centres were recognised in the Hamilton area, including three previously unrecorded volcanoes, one of which, the Cas Maar, constitutes the northernmost maar-cone volcanic complex in the Western Plains subprovince. Seven previously allocated eruption centres were called into question based on field and laboratory observations. Three phases of volcanic activity have been suggested by other authors and are interpreted to correlate with ages of >4 Ma, ca 2 Ma and <0.5 Ma, which may be further subdivided based on preservation of outcrop. Geochemical compositions of the dominantly basaltic products become increasingly alkaline and enriched in incompatible elements from Phase 1 to Phase 2, with Phase 3 eruptions both covering the entire geochemical range and extending into increasingly enriched compositions. This research highlights the importance of a multifaceted approach to landform mapping and demonstrates that additional volcanic centres may yet be discovered in the Newer Volcanics Province.
Abstract:
This study proposes an optimized design approach in which a model of a specially shaped composite tank for spacecraft is built using finite element analysis. The composite layers are preliminarily designed by combining the quasi-network design method with numerical simulation, which determines the ratio between the angle and the thickness of the layers as the initial value for the optimized design. By adopting an adaptive simulated annealing algorithm, the angles and the number of layers at each angle are optimized to minimize the weight of the structure. On this basis, the stacking sequence of the composite layers is formulated according to the number of layers in the optimized structure by applying the enumeration method and combining the general design parameters. Numerical simulation is finally adopted to calculate the buckling limit of tanks produced by the different design methods. This study takes a composite tank with a cone-shaped cylinder body as an example, in which the ellipsoid head section and the outer wall plate are selected as the objects for validating the method. The results show that the quasi-network design method can improve the design quality of the composite layup in tanks with complex preliminary loading conditions, and that the adaptive simulated annealing algorithm can reduce the initial design weight by 30%, effectively probing for the global optimal solution while optimizing the weight of the structure. This optimization method is therefore capable of designing and optimizing specially shaped composite tanks under complex loading conditions.
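As a rough illustration of the optimization step described above, here is a minimal simulated-annealing sketch in Python. The weight model, ply-count bounds, candidate angles, and the `buckling_ok` constraint are hypothetical stand-ins for the paper's finite element buckling analysis and quasi-network initialization; only the accept/reject annealing logic reflects the described method.

```python
import math
import random

# Hypothetical stand-ins for the paper's design space and constraints.
CANDIDATE_ANGLES = [0, 45, -45, 90]
PLY_WEIGHT = 1.0          # weight contribution per ply (arbitrary units)
MIN_PLIES, MAX_PLIES = 8, 40

def weight(layup):
    return PLY_WEIGHT * len(layup)

def buckling_ok(layup):
    # Placeholder constraint: enough plies and at least one +/-45 pair.
    return len(layup) >= 12 and 45 in layup and -45 in layup

def neighbour(layup):
    # Randomly add, remove, or swap one ply angle.
    new = list(layup)
    move = random.choice(["add", "remove", "swap"])
    if move == "add" and len(new) < MAX_PLIES:
        new.insert(random.randrange(len(new) + 1), random.choice(CANDIDATE_ANGLES))
    elif move == "remove" and len(new) > MIN_PLIES:
        new.pop(random.randrange(len(new)))
    else:
        new[random.randrange(len(new))] = random.choice(CANDIDATE_ANGLES)
    return new

def anneal(initial, t0=10.0, cooling=0.995, steps=20000):
    current, best = initial, initial
    t = t0
    for _ in range(steps):
        cand = neighbour(current)
        if buckling_ok(cand):
            delta = weight(cand) - weight(current)
            # Accept improvements always; worse moves with Boltzmann probability.
            if delta < 0 or random.random() < math.exp(-delta / t):
                current = cand
                if weight(current) < weight(best):
                    best = current
        t *= cooling
    return best

layup = anneal([0, 45, -45, 90] * 5)
print(len(layup), "plies:", layup)
```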
Abstract:
This study uses the reverse salient methodology to contrast subsystems in video game consoles in order to discover, characterize, and forecast the most significant technology gap. We build on the current methodologies (Performance Gap and Time Gap) for measuring the magnitude of Reverse Salience by showing the effectiveness of the Performance Gap Ratio (PGR). The three subject subsystems in this analysis are the CPU Score, GPU core frequency, and video memory bandwidth. CPU Score is a metric developed for this project: the product of the core frequency, the number of parallel cores, and the instruction size. We measure the Performance Gap of each subsystem against concurrently available PC hardware on the market. Using PGR, we normalize the evolution of these technologies for comparative analysis. The results indicate that while CPU performance has historically been the Reverse Salient, video memory bandwidth has taken over as the fastest-growing technology gap in the current generation. Finally, we create a technology forecasting model that shows how much the video RAM bandwidth gap will grow through 2019 should the current trend continue. This analysis can assist console developers in assigning resources to the next generation of platforms, which will ultimately result in longer hardware life cycles.
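As a sketch of the metrics named above: the abstract defines CPU Score as the product of core frequency, parallel cores, and instruction size, and PGR as a normalization of the console-versus-PC gap. The exact normalization and all numbers below are illustrative assumptions, not the study's data.

```python
# Sketch of CPU Score and a Performance Gap Ratio (PGR); the ratio form of
# PGR is an assumption, and the hardware figures are placeholders.

def cpu_score(freq_ghz: float, cores: int, instruction_bits: int) -> float:
    """CPU Score = core frequency x parallel cores x instruction size."""
    return freq_ghz * cores * instruction_bits

def performance_gap_ratio(console: float, pc: float) -> float:
    """Normalize the console/PC gap so subsystems are comparable over time."""
    return console / pc

# Hypothetical example: a console CPU vs. a contemporaneous PC CPU.
console_score = cpu_score(1.6, 8, 64)
pc_score = cpu_score(3.5, 4, 64)
print(f"PGR = {performance_gap_ratio(console_score, pc_score):.2f}")
```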
Abstract:
We propose a new information-theoretic metric, the symmetric Kullback-Leibler divergence (sKL-divergence), to measure the difference between two water diffusivity profiles in high angular resolution diffusion imaging (HARDI). Water diffusivity profiles are modeled as probability density functions on the unit sphere, and the sKL-divergence is computed from a spherical harmonic series, which greatly reduces computational complexity. Adjustment of the orientation of diffusivity functions is essential when the image is being warped, so we propose a fast algorithm to determine the principal direction of diffusivity functions using principal component analysis (PCA). We compare sKL-divergence with other inner-product based cost functions using synthetic samples and real HARDI data, and show that the sKL-divergence is highly sensitive in detecting small differences between two diffusivity profiles and therefore shows promise for applications in the nonlinear registration and multisubject statistical analysis of HARDI data.
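For reference, the standard definition of the symmetric KL-divergence between two diffusivity profiles modeled as PDFs on the unit sphere, which the paper evaluates via a spherical harmonic series, can be written as:

```latex
% Symmetric KL-divergence between spherical PDFs p and q on S^2:
\[
  D_{\mathrm{sKL}}(p, q)
  = \frac{1}{2}\left[ D_{\mathrm{KL}}(p \,\|\, q) + D_{\mathrm{KL}}(q \,\|\, p) \right]
  = \frac{1}{2}\int_{S^2} \bigl(p(\mathbf{u}) - q(\mathbf{u})\bigr)
      \ln \frac{p(\mathbf{u})}{q(\mathbf{u})}\, d\mathbf{u}.
\]
```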
Abstract:
We apply an information-theoretic cost metric, the symmetrized Kullback-Leibler (sKL) divergence, or $J$-divergence, to fluid registration of diffusion tensor images. The difference between diffusion tensors is quantified based on the sKL-divergence of their associated probability density functions (PDFs). Three-dimensional DTI data from 34 subjects were fluidly registered to an optimized target image. To allow large image deformations but preserve image topology, we regularized the flow with a large-deformation diffeomorphic mapping based on the kinematics of a Navier-Stokes fluid. A driving force was developed to minimize the $J$-divergence between the deforming source and target diffusion functions, while reorienting the flowing tensors to preserve fiber topography. In initial experiments, we showed that the sKL-divergence based on full diffusion PDFs is adaptable to higher-order diffusion models, such as high angular resolution diffusion imaging (HARDI). The sKL-divergence was sensitive to subtle differences between two diffusivity profiles, showing promise for nonlinear registration applications and multisubject statistical analysis of HARDI data.
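Under the conventional tensor model, each diffusion tensor $D_i$ induces a zero-mean Gaussian displacement PDF, and the $J$-divergence then has a well-known closed form, shown below with the convention $J = D_{\mathrm{KL}}(p\,\|\,q) + D_{\mathrm{KL}}(q\,\|\,p)$ (the paper's scaling convention may differ):

```latex
% Closed form for zero-mean Gaussian diffusion PDFs p_i = N(0, D_i) in 3D;
% the log-determinant terms of the two KL directions cancel:
\[
  J(D_1, D_2)
  = \frac{1}{2}\,\mathrm{tr}\!\left( D_1^{-1} D_2 + D_2^{-1} D_1 \right) - 3.
\]
```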
Abstract:
Head motion (HM) is a critical confounding factor in functional MRI. Here we investigate whether HM during resting state functional MRI (RS-fMRI) is influenced by genetic factors in a sample of 462 twins (65% female; 101 MZ (monozygotic) and 130 DZ (dizygotic) twin pairs; mean age: 21 (SD=3.16), range 16-29). Heritability estimates for three HM components, mean translation (MT), maximum translation (MAXT) and mean rotation (MR), ranged from 37% to 51%. We detected a significant common genetic influence on HM variability, with about two-thirds of the variance (genetic correlations range 0.76-1.00) shared between MR, MT and MAXT. A composite metric (HM-PC1), which aggregated these three components, was also moderately heritable (h2=42%). Using a sub-sample (N=35) of the twins, we confirmed that mean and maximum translational and rotational motions were consistent "traits" over repeated scans (r=0.53-0.59); reliability was even higher for the composite metric (r=0.66). In addition, phenotypic and cross-trait cross-twin correlations between HM and resting-state functional connectivities (RS-FCs) with Brodmann areas (BA) 44 and 45, for which RS-FCs were found to be moderately heritable (BA44: h2=0.23 (SD=0.041); BA45: h2=0.26 (SD=0.061)), indicated that HM might not represent a major bias in genetic studies using FCs. Even so, the HM effect on FC was not completely eliminated after regression. HM may be a valuable endophenotype whose relationship with brain disorders remains to be elucidated.
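A minimal sketch of how a PCA composite such as HM-PC1 could be formed from the three motion components; the data, seed and printed percentage below are placeholders, not the study's measurements.

```python
import numpy as np

rng = np.random.default_rng(0)
# Placeholder motion data; columns stand in for MT, MAXT, MR.
hm = rng.lognormal(mean=-1.0, sigma=0.5, size=(462, 3))

z = (hm - hm.mean(axis=0)) / hm.std(axis=0)       # standardize each component
u, s, vt = np.linalg.svd(z, full_matrices=False)  # PCA via SVD of z-scores
hm_pc1 = z @ vt[0]                                # composite: scores on PC1

print(f"PC1 explains {s[0]**2 / np.sum(s**2):.0%} of head-motion variance")
```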
Abstract:
Automatic labeling of white matter fibres in diffusion-weighted brain MRI is vital for comparing brain integrity and connectivity across populations, but is challenging. Whole brain tractography generates a vast set of fibres throughout the brain, but it is hard to cluster them into anatomically meaningful tracts, due to wide individual variations in the trajectory and shape of white matter pathways. We propose a novel automatic tract labeling algorithm that fuses information from tractography and multiple hand-labeled fibre tract atlases. As streamline tractography can generate a large number of false positive fibres, we developed a top-down approach to extract tracts consistent with known anatomy, based on a distance metric to multiple hand-labeled atlases. Clustering results from different atlases were fused, using a multi-stage fusion scheme. Our "label fusion" method reliably extracted the major tracts from 105-gradient HARDI scans of 100 young normal adults.
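One commonly used streamline distance that could drive such atlas-based labeling is the mean closest-point distance; the sketch below assigns a fibre to the nearest hand-labeled tract and is a simplified stand-in for the paper's multi-atlas, multi-stage fusion.

```python
import numpy as np

def mean_closest_point_distance(fibre_a: np.ndarray, fibre_b: np.ndarray) -> float:
    """Symmetrized mean closest-point distance; fibres are (n, 3) point arrays."""
    d = np.linalg.norm(fibre_a[:, None, :] - fibre_b[None, :, :], axis=2)
    return 0.5 * (d.min(axis=1).mean() + d.min(axis=0).mean())

def label_fibre(fibre, atlas):
    """atlas: {tract_name: [atlas fibres]}; returns the nearest tract's name."""
    dists = {name: min(mean_closest_point_distance(fibre, f) for f in fibres)
             for name, fibres in atlas.items()}
    return min(dists, key=dists.get)

# Toy usage with random placeholder fibres and hypothetical tract names.
rng = np.random.default_rng(0)
atlas = {"CST": [rng.random((20, 3))], "ARC": [rng.random((20, 3))]}
print(label_fibre(rng.random((15, 3)), atlas))
```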
Abstract:
We study the influence of the choice of template in tensor-based morphometry. Using 3D brain MR images from 10 monozygotic twin pairs, we defined a tensor-based distance in the log-Euclidean framework [1] between each image pair in the study. Relative to this metric, twin pairs were found to be closer to each other on average than random pairings, consistent with evidence that brain structure is under strong genetic control. We also computed the intraclass correlation and associated permutation p-value at each voxel for the determinant of the Jacobian matrix of the transformation. The cumulative distribution function (CDF) of the voxel-wise p-values was computed for each of the templates and compared to the null distribution. Surprisingly, there was very little difference between the CDFs of statistics computed from analyses using different templates. As the brain with the least log-Euclidean deformation cost, the mean template defined here avoids the blurring caused by creating a synthetic image from a population; when selected from a large population, it also avoids bias by being geometrically centered, in a metric sensitive enough to anatomical similarity that it can even detect genetic affinity among anatomies.
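The log-Euclidean distance referenced above has a simple closed form, d(T1, T2) = ||log T1 - log T2||_F; a minimal Python sketch follows (the diagonal tensors are chosen purely for illustration):

```python
import numpy as np
from scipy.linalg import logm

def log_euclidean_distance(t1: np.ndarray, t2: np.ndarray) -> float:
    # For symmetric positive-definite tensors, the matrix logarithms are real,
    # and the distance is the Frobenius norm of their difference.
    return np.linalg.norm(logm(t1) - logm(t2), ord="fro")

t1 = np.diag([2.0, 1.0, 0.5])   # illustrative diffusion tensor
t2 = np.diag([1.0, 1.0, 1.0])   # identity tensor for comparison
print(log_euclidean_distance(t1, t2))
```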
Abstract:
To date, a number of two-dimensional (2D) topological insulators (TIs) have been realized in Group 14 elemental honeycomb lattices, but all are inversion-symmetric. Here, based on first-principles calculations, we predict a new family of 2D inversion-asymmetric TIs with sizeable bulk gaps from 105 meV to 284 meV, in X2–GeSn (X = H, F, Cl, Br, I) monolayers, making them in principle suitable for room-temperature applications. The nontrivial topological characteristics of inverted band orders are identified in pristine X2–GeSn with X = (F, Cl, Br, I), whereas H2–GeSn undergoes a nontrivial band inversion at 8% lattice expansion. Topologically protected edge states are identified in X2–GeSn with X = (F, Cl, Br, I), as well as in strained H2–GeSn. More importantly, the edges of these systems, which exhibit single-Dirac-cone characteristics located exactly in the middle of their bulk band gaps, are ideal for dissipationless transport. Thus, Group 14 elemental honeycomb lattices provide a fascinating playground for the manipulation of quantum states.
Abstract:
Purpose: To develop a signal processing paradigm for extracting ERG responses to temporal sinusoidal modulation with contrasts ranging from below perceptual threshold to suprathreshold contrasts, and to estimate the magnitude of intrinsic noise in ERG signals at different stimulus contrasts. Methods: Photopic test stimuli were generated using a 4-primary Maxwellian view optical system. The 4 primary lights were sinusoidally temporally modulated in phase (36 Hz; 2.5-50% Michelson). The stimuli were presented in 1 s epochs separated by a 1 ms blank interval and repeated 160 times (160.16 s duration) during the recording of the continuous flicker ERG from the right eye using DTL fiber electrodes. After artefact rejection, the ERG signal was extracted using Fourier methods in each of the 1 s epochs where a stimulus was presented. The signal processing allows for computation of the intrinsic noise distribution in addition to the signal-to-noise ratio (SNR). Results: We provide the first report that the ERG intrinsic noise distribution is independent of stimulus contrast, whereas SNR decreases linearly with decreasing contrast until the noise limit at ~2.5%. The 1 ms blank intervals between epochs de-correlated the ERG signal at the line frequency (50 Hz) and thus increased the SNR of the averaged response. We confirm that response amplitude increases linearly with stimulus contrast. The phase response shows a shallow positive relationship with stimulus contrast. Conclusions: This new technique will enable recording of intrinsic noise in ERG signals above and below perceptual visual threshold and is suitable for measurement of continuous rod and cone ERGs across a range of temporal frequencies, and of post-receptoral processing in the primary retinogeniculate pathways at low stimulus contrasts. The intrinsic noise distribution may have application as a biomarker for detecting changes in disease progression or treatment efficacy.
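A minimal sketch of the per-epoch Fourier extraction described in the Methods, using a synthetic 36 Hz signal in place of real DTL-electrode recordings; the sampling rate, noise level and choice of noise bins are assumptions.

```python
import numpy as np

FS = 1000          # sampling rate (Hz), assumed
F_STIM = 36        # stimulus frequency (Hz), as in the abstract
N_EPOCHS = 160     # number of 1 s stimulus epochs

t = np.arange(FS) / FS          # one 1 s epoch
rng = np.random.default_rng(1)

results = []
for _ in range(N_EPOCHS):
    # Synthetic epoch: 36 Hz response plus Gaussian noise (placeholder).
    epoch = 2.0 * np.sin(2 * np.pi * F_STIM * t) + rng.normal(0, 1.5, t.size)
    spectrum = np.fft.rfft(epoch) / t.size
    freqs = np.fft.rfftfreq(t.size, 1 / FS)
    k = np.argmin(np.abs(freqs - F_STIM))            # bin at 36 Hz
    signal = 2 * np.abs(spectrum[k])                 # single-sided amplitude
    # Estimate the noise floor from neighbouring frequency bins.
    noise_bins = np.r_[spectrum[k - 3:k], spectrum[k + 1:k + 4]]
    noise = 2 * np.mean(np.abs(noise_bins))
    results.append((signal, signal / noise))

amp, snr = np.mean(results, axis=0)
print(f"mean amplitude {amp:.2f}, mean SNR {snr:.1f}")
```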
Abstract:
This work reports the effects of seed nanoparticle size and concentration on heterogeneous crystal nucleation and growth in colloidal suspensions. We examined these effects in the Au nanoparticle-seeded growth of Au-ZnO hetero-nanocrystals under synthesis conditions that generate hexagonal, cone-shaped ZnO nanocrystals. We observed that small (~4 nm) Au seed nanoparticles form one-to-one Au-ZnO hetero-dimers and that Au nanoparticle seeds of this size can also act as crystallization 'catalysts' that readily promote the nucleation and growth of ZnO nanocrystals. Larger seed nanoparticles (~9 nm, ~11 nm) provided multiple, stable ZnO nucleation sites, generating multi-crystalline hetero-trimers, tetramers and oligomers.
Abstract:
In this paper, we present a machine learning approach to measuring the visual quality of JPEG-coded images. The features for predicting the perceived image quality are extracted by considering key human visual sensitivity (HVS) factors such as edge amplitude, edge length, background activity and background luminance. Image quality assessment involves estimating the functional relationship between HVS features and subjective test scores. The quality of the compressed images is obtained without referring to their original images (a 'no-reference' metric). Here, the problem of quality estimation is transformed into a classification problem and solved using the extreme learning machine (ELM) algorithm. In ELM, the input weights and bias values are randomly chosen and the output weights are analytically calculated. The generalization performance of the ELM algorithm for classification problems with imbalance in the number of samples per quality class depends critically on the input weights and bias values. Hence, we propose two schemes, namely the k-fold selection scheme (KS-ELM) and the real-coded genetic algorithm (RCGA-ELM), to select the input weights and bias values such that the generalization performance of the classifier is maximized. Results indicate that the proposed schemes significantly improve the performance of the ELM classifier under imbalanced conditions for image quality assessment. The estimated visual quality from the proposed RCGA-ELM emulates the mean opinion score very well, and the experimental results are compared with an existing JPEG no-reference image quality metric and the full-reference structural similarity image quality metric.
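For context, the basic ELM computation that the two proposed schemes build on (random input weights and biases, output weights solved analytically via the pseudo-inverse) can be sketched as follows; the data and dimensions are placeholders, and the KS-ELM and RCGA-ELM weight-selection schemes themselves are not reproduced here.

```python
import numpy as np

rng = np.random.default_rng(0)
n_samples, n_features, n_hidden, n_classes = 200, 4, 50, 5

X = rng.normal(size=(n_samples, n_features))    # placeholder HVS feature vectors
y = rng.integers(0, n_classes, size=n_samples)  # placeholder quality classes
T = np.eye(n_classes)[y]                        # one-hot targets

W = rng.normal(size=(n_features, n_hidden))     # random input weights
b = rng.normal(size=n_hidden)                   # random bias values
H = np.tanh(X @ W + b)                          # hidden-layer outputs
beta = np.linalg.pinv(H) @ T                    # analytic output weights

pred = np.argmax(H @ beta, axis=1)              # predicted quality class
print("training accuracy:", (pred == y).mean())
```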
Abstract:
Estimating the economic burden of injuries is important for setting priorities, allocating scarce health resources and planning cost-effective prevention activities. As a metric of burden, costs account for multiple injury consequences (death, severity, disability, body region, nature of injury) in a single unit of measurement. In a 1989 landmark report to the US Congress, Rice et al [1] estimated the lifetime costs of injuries in the USA in 1985. By 2000, the epidemiology and burden of injuries had changed enough that the US Congress mandated an update, resulting in a book on the incidence and economic burden of injury in the USA [2]. To make these findings more accessible to the larger realm of scientists and practitioners, and to provide a template for conducting the same economic burden analyses in other countries and settings, a summary [3] was published in Injury Prevention. Corso et al reported that, between 1985 and 2000, injury rates declined roughly 15%. The estimated lifetime cost of these injuries declined 20%, totalling US$406 billion, including US$80 billion in medical costs and US$326 billion in lost productivity. While incidence reflects problem size, the relative burden of injury is better expressed using costs.
Abstract:
Ship seakeeping operability refers to the quantification of motion performance in waves relative to mission requirements. This is used to make decisions about preferred vessel designs, but it can also be used as a comprehensive assessment of the benefits of ship-motion-control systems. Traditionally, operability computation aggregates statistics of motion computed over the envelope of likely environmental conditions in order to determine a coefficient in the range from 0 to 1 called operability. When used for assessment of motion-control systems, the increase in operability is taken as the key performance indicator. The operability coefficient is often given the interpretation of the percentage of time operable. This paper considers an alternative probabilistic approach to this traditional computation of operability. It characterises operability not as a number to which a frequency interpretation is attached, but as a hypothesis that a vessel will attain the desired performance in one mission considering the envelope of likely operational conditions. This enables the use of Bayesian theory to compute the probability that this hypothesis is true conditional on data from simulations. Thus, the metric considered is the probability of operability. This formulation not only adheres to recent developments in reliability and risk analysis, but also allows incorporating more accurate descriptions of ship-motion-control systems into the analysis, since the analysis is not limited to linear ship responses in the frequency domain. The paper also discusses an extension of the approach to the assessment of increased levels of autonomy for unmanned marine craft.
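One way to write the posterior that this formulation targets, with H the operability hypothesis and D the simulation data, is the general Bayesian form below (a sketch, not the paper's specific likelihood model):

```latex
% Posterior probability of the operability hypothesis H given simulation data D:
\[
  P(H \mid D)
  = \frac{P(D \mid H)\, P(H)}
         {P(D \mid H)\, P(H) + P(D \mid \neg H)\, P(\neg H)},
\]
% so the reported metric is a posterior probability of operability rather than
% a frequency-interpreted coefficient in [0, 1].
```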