955 results for APPLIED LOAD
Abstract:
Load-induced extravascular fluid flow has been postulated to play a role in mechanotransduction of physiological loads at the cellular level. Furthermore, the displaced fluid serves as a carrier for metabolites, nutrients, mineral precursors and osteotropic agents important for cellular activity. We hypothesise that load-induced fluid flow enhances the transport of these key substances, thus helping to regulate cellular activity associated with processes of functional adaptation and remodelling. To test this hypothesis, molecular tracer methods developed previously by our group were applied in vivo to observe and quantify the effects of load-induced fluid flow under four-point-bending loads. Preterminal tracer transport studies were carried out on 24 skeletally mature Sprague Dawley rats. Mechanical loading enhanced the transport of both small- and larger-molecular-mass tracers within the bony tissue of the tibial mid-diaphysis. Mechanical loading showed a highly significant effect on the number of periosteocytic spaces exhibiting tracer within the cross section of each bone. For all loading rates studied, the concentration of Procion Red tracer was consistently higher in the tibia subjected to pure bending loads than in the unloaded, contralateral tibia. Furthermore, the enhancement of transport was highly site-specific. In bones subjected to pure bending loads, a greater number of periosteocytic spaces exhibited the presence of tracer in the tension band of the cross section than in the compression band; this may reflect the higher strains induced in the tension band compared with the compression band within the mid-diaphysis of the rat tibia. Regardless of loading mode, the mean difference between the loaded side and the unloaded contralateral control side decreased with increasing loading frequency. Whether this reflects the length of exposure to the tracer or specific frequency effects cannot be determined by this set of experiments. These in vivo experimental results corroborate those of previous ex vivo and in vitro studies. Strain-related differences in tracer distribution provide support for the hypothesis that load-induced fluid flow plays a regulatory role in processes associated with functional adaptation.
Abstract:
A system for the NDT testing of the integrity of composite materials and of adhesive bonds has been developed to meet industrial requirements. The vibration techniques used were found to be applicable to the development of fluid measuring transducers. The vibrational spectra of thin rectangular bars were used for the NDT work. A machined cut in a bar had a significant effect on the spectrum, but a genuine crack gave an unambiguous response at high amplitudes. This was the generation of fretting crack noise at frequencies far above that of the drive. A specially designed vibrational decrement meter which, in effect, measures mechanical energy loss enabled a numerical classification of material adhesion to be obtained. This was used to study bars which had been flame or plasma sprayed with a variety of materials. It has become a useful tool in optimising coating methods. A direct industrial application was to classify piston rings of high performance I.C. engines. Each consists of a cast iron ring with a channel into which molybdenum, a good bearing surface, is sprayed. The NDT classification agreed quite well with the destructive test normally used. The techniques and equipment used for the NDT work were applied to the development of the tuning fork transducers investigated by Hassan into commercial density and viscosity devices. Using narrowly spaced, large area tines, a thin lamina of fluid is trapped between them. It stores a large fraction of the vibrational energy which, acting as an inertial load, reduces the frequency. Magnetostrictive and piezoelectric effects, used separately or in combination, enable the fork to be operated through a flange. This allows it to be used in pipeline or 'dipstick' applications. Using a different tine geometry, the viscosity loading can be made predominant. This, as well as the signal decrement of the density transducer, makes a practical viscometer.
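As a hedged illustration of the inertial-loading principle described above (not the authors' implementation), the sketch below treats a tine as a simple spring-mass resonator whose frequency drops when the trapped fluid lamina adds effective mass; the stiffness, tine mass and coupling factor are hypothetical placeholders that a real instrument would obtain by calibration.

```python
import math

def resonant_frequency(k, m_effective):
    """Natural frequency (Hz) of a spring-mass resonator: f = sqrt(k/m) / (2*pi)."""
    return math.sqrt(k / m_effective) / (2.0 * math.pi)

def density_from_frequency_shift(f_loaded, f_unloaded, k, alpha):
    """Infer fluid density from the drop in tine frequency.

    The trapped fluid lamina is modelled as an added inertial mass
    m_fluid = alpha * rho, where alpha (m^3) is a hypothetical geometric
    coupling factor found by calibration."""
    m_unloaded = k / (2.0 * math.pi * f_unloaded) ** 2
    m_loaded = k / (2.0 * math.pi * f_loaded) ** 2
    return (m_loaded - m_unloaded) / alpha

# Illustrative numbers only, not measured values.
k = 5.0e4        # N/m, assumed tine stiffness
m = 2.0e-3       # kg, assumed tine effective mass
f0 = resonant_frequency(k, m)                       # ~796 Hz unloaded
print(density_from_frequency_shift(750.0, f0, k, alpha=2.5e-7))  # ~1000 kg/m^3
```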
Resumo:
Several axi-symmetric EN3B steel components differing in shape and size were forged on a 100 ton joint knuckle press. A load cell fitted under the lower die inserts recorded the total deformation forces. Job parameters were measured off the billets and the forged parts. Slug temperatures were varied and two lubricants - aqueous colloidal graphite and oil - were used. An industrial study was also conducted to check the results of the laboratory experiments. Loads were measured (with calibrated extensometers attached to the press frames) when adequately heated mild steel slugs were being forged in finishing dies. Geometric parameters relating to the jobs and the dies were obtained from works drawings. All the variables considered in the laboratory study could not, however, be investigated without disrupting production. In spite of this obvious limitation, the study confirmed that parting area is the most significant geometric factor influencing the forging load. Multiple regression analyses of the laboratory and industrial results showed that die loads increase significantly with the weights and parting areas of press forged components, and with the width to thickness ratios of the flashes formed, but diminish with increasing slug temperatures and higher billet diameter to height ratios. The analyses also showed that more complicated parts require greater loads to forge them. Die stresses, due to applied axial loads, were investigated by the photoelastic method. The three dimensional frozen stress technique was employed. Model dies were machined from cast araldite cylinders, and the slug material was simulated with plasticene. Test samples were cut from the centres of the dies after the stress freezing. Examination of the samples, and subsequent calculations, showed that the highest stresses were developed in die outer corners. This observation partly explains why corner cracking occurs frequently in industrial forging dies. Investigation of die contact during the forging operation revealed the development of very high stresses.
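A hedged sketch of how a multiple regression of forging load on the geometric and process variables named above might be set up; the function and variable names are placeholders, and no coefficients from the study are reproduced here.

```python
import numpy as np

def fit_forging_load_model(parting_area, weight, flash_wt_ratio,
                           slug_temperature, billet_dh_ratio, load):
    """Least-squares fit of
    load = b0 + b1*area + b2*weight + b3*(w/t) + b4*temperature + b5*(d/h).
    Each argument is a 1-D sequence of measurements of equal length;
    returns the fitted coefficients b0..b5."""
    load = np.asarray(load, dtype=float)
    X = np.column_stack([
        np.ones_like(load),                        # intercept
        np.asarray(parting_area, dtype=float),     # expected to increase load
        np.asarray(weight, dtype=float),           # expected to increase load
        np.asarray(flash_wt_ratio, dtype=float),   # expected to increase load
        np.asarray(slug_temperature, dtype=float), # expected to decrease load
        np.asarray(billet_dh_ratio, dtype=float),  # expected to decrease load
    ])
    coeffs, *_ = np.linalg.lstsq(X, load, rcond=None)
    return coeffs
```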
Abstract:
Multi-agent algorithms inspired by the division of labour in social insects and by markets are applied to a constrained problem of distributed task allocation. The efficiency (average number of tasks performed), the flexibility (ability to react to changes in the environment), and the sensitivity to load (ability to cope with differing demands) are investigated in both static and dynamic environments. A hybrid algorithm combining both approaches is shown to exhibit improved efficiency and robustness. We employ nature-inspired particle swarm optimisation to obtain optimised parameters for all algorithms in a range of representative environments. Although results are obtained for large population sizes to avoid finite size effects, the influence of population size on the performance is also analysed. From a theoretical point of view, we analyse the causes of efficiency loss, derive theoretical upper bounds for the efficiency, and compare these with the experimental results.
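One widely used formulation of insect-inspired task allocation is the response-threshold rule, sketched below as a hedged illustration; the abstract does not specify the exact rule or parameter values used, so the thresholds and stimulus level here are placeholders.

```python
import random

def engagement_probability(stimulus, threshold, n=2):
    """Classic response-threshold rule: P = s^n / (s^n + theta^n)."""
    return stimulus ** n / (stimulus ** n + threshold ** n)

def allocate(agent_thresholds, task_stimulus):
    """Each idle agent independently decides whether to take up the task."""
    return [random.random() < engagement_probability(task_stimulus, theta)
            for theta in agent_thresholds]

# Illustrative call: five agents with assumed thresholds, stimulus level 0.6.
print(allocate([0.2, 0.4, 0.6, 0.8, 1.0], 0.6))
```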
Abstract:
In this paper, we discuss some practical implications for implementing adaptable network algorithms applied to non-stationary time series problems. Two real world data sets, containing electricity load demands and foreign exchange market prices, are used to test several different methods, ranging from linear models with fixed parameters, to non-linear models which adapt both parameters and model order on-line. Training with the extended Kalman filter, we demonstrate that the dynamic model-order increment procedure of the resource allocating RBF network (RAN) is highly sensitive to the parameters of the novelty criterion. We investigate the use of system noise for increasing the plasticity of the Kalman filter training algorithm, and discuss the consequences for on-line model order selection. The results of our experiments show that there are advantages to be gained in tracking real world non-stationary data through the use of more complex adaptive models.
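A hedged sketch of the RAN novelty criterion referred to above: a new RBF unit is allocated only when the input lies far from all existing centres and the prediction error is large. The thresholds epsilon and e_min are exactly the sensitive novelty-criterion parameters the abstract discusses; the values below are placeholders.

```python
import numpy as np

def should_allocate_unit(x, centres, prediction_error,
                         epsilon=0.5, e_min=0.05):
    """RAN-style novelty criterion: grow the network only if the input is
    farther than epsilon from every existing centre AND the current
    prediction error exceeds e_min. Threshold values are illustrative."""
    if len(centres) == 0:
        return True
    nearest = min(np.linalg.norm(np.asarray(x) - np.asarray(c)) for c in centres)
    return nearest > epsilon and abs(prediction_error) > e_min
```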
Abstract:
A chirped moiré fiber Bragg grating has been demonstrated to be capable of measuring the magnitude, position, and footprint of a transverse load. The device provides an average spatial resolution of 164 μm and has a load accuracy of 0.15 N/mm, or 50 με. © 2004 Optical Society of America.
Abstract:
We demonstrate the sensitivity of Bragg gratings in a multicore fiber to transverse load. The Bragg peaks are split because of stress-induced birefringence, the magnitude of which depends upon the load and grating position relative to the load axis. Experiments show that a set of gratings in a four-core fiber can measure a load axis angle to ±5° and a load magnitude to ±15 N m⁻¹ up to 2500 N m⁻¹. We consider alternative designs of multicore fiber for optimal load sensing and compare experimental and modeled data. © 2005 Optical Society of America.
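A hedged illustration of the peak-splitting relation underlying the measurement: from the Bragg condition λ_B = 2·n_eff·Λ, a load-induced birefringence B = n_x − n_y splits the reflection peak by Δλ ≈ 2·Λ·B. The grating period and birefringence values below are placeholders, not the paper's data.

```python
def bragg_peak_split(grating_period_nm, birefringence):
    """Peak separation (nm) between the two polarisation eigenmodes:
    delta_lambda = 2 * Lambda * (n_x - n_y)."""
    return 2.0 * grating_period_nm * birefringence

# Illustrative values: ~530 nm period (lambda_B ~ 1534 nm for n_eff ~ 1.447)
# and an assumed load-induced birefringence of 5e-6.
print(bragg_peak_split(530.0, 5e-6))   # ~0.0053 nm peak splitting
```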
Abstract:
A new experimental technique is presented for making measurements of biaxial residual stress using load and depth sensing indentation (nanoindentation). The technique is based on spherical indentation, which, in certain deformation regimes, can be much more sensitive to residual stress than indentation with sharp pyramidal indenters like the Berkovich. Two different methods of analysis were developed: one requiring an independent measure of the material's yield strength and the other a reference specimen in the unstressed state or other known reference condition. Experiments conducted on aluminum alloys to which controlled biaxial bending stresses were applied showed that the methods are capable of measuring the residual stress to within 10-20% of the specimen yield stress. Because the methods do not require imaging of the hardness impressions, they are potentially useful for making localized measurements of residual stress, as in thin films or small volumes, or for characterization of point-to-point spatial variations of the surface stress.
Abstract:
This article reports on an investigation with first-year undergraduate Product Design and Management students within a School of Engineering and Applied Science. The students at the time of this investigation had studied fundamental engineering science and mathematics for one semester. The students were given an open-ended, ill-formed problem which involved designing a simple bridge to cross a river. They were given a talk on problem solving and given a rubric to follow, if they chose to do so. They were not given any formulae or procedures needed in order to resolve the problem. In theory, they possessed the knowledge to ask the right questions in order to make assumptions but, in practice, it turned out they were unable to link their a priori knowledge to resolve this problem. They were able to solve simple beam problems when given closed questions. The results show they were unable to visualize a simple bridge as an augmented beam problem and ask pertinent questions and hence formulate appropriate assumptions in order to offer resolutions.
Abstract:
The current study applied classic cognitive capacity models to examine the effect of cognitive load on deception. The study also examined whether the manipulation of cognitive load would result in the magnification of differences between liars and truth-tellers. In the first study, 87 participants engaged in videotaped interviews while being either deceptive or truthful about a target event. Some participants engaged in a concurrent secondary task while being interviewed. Performance on the secondary task was measured. As expected, truth-tellers performed better on secondary task items than liars as evidenced by higher accuracy rates. These results confirm the long-held assumption that being deceptive is more cognitively demanding than being truthful. In the second part of the study, the videotaped interviews of both liars and truth-tellers were shown to 69 observers. After watching the interviews, observers were asked to make a veracity judgment for each participant. Observers made more accurate veracity judgments when viewing participants who engaged in a concurrent secondary task than when viewing those who did not. Observers also indicated that participants who engaged in a concurrent secondary task appeared to think harder than participants who did not. This study provides evidence that engaging in deception is more cognitively demanding than telling the truth. As hypothesized, having participants engage in a concurrent secondary task led to the magnification of differences between liars and truth-tellers. This magnification of differences led to more accurate veracity rates in a second group of observers. The implications for deception detection are discussed.
Abstract:
This chapter describes a parallel optimization technique that incorporates a distributed load-balancing algorithm and provides an extremely fast solution to the problem of load-balancing adaptive unstructured meshes. Moreover, a parallel graph contraction technique can be employed to enhance the partition quality, and the resulting strategy outperforms or matches results from existing state-of-the-art static mesh partitioning algorithms. The strategy can also be applied to static partitioning problems. Dynamic procedures have been found to be much faster than static techniques, to provide partitions of similar or higher quality and, in comparison, to involve the migration of only a fraction of the data. The method employs a new iterative optimization technique that balances the workload and attempts to minimize the interprocessor communications overhead. Experiments on a series of adaptively refined meshes indicate that the algorithm provides partitions of an equivalent or higher quality to static partitioners (which do not reuse the existing partition) and much more quickly. The dynamic evolution of load has three major influences on possible partitioning techniques: cost, reuse, and parallelism. The unstructured mesh may be modified every few time-steps and so the load-balancing must have a low cost relative to that of the solution algorithm in between remeshing.
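The iterative balancing idea can be illustrated with a hedged sketch of first-order diffusive load balancing, in which each processor repeatedly exchanges load with its neighbours in proportion to the load difference. This is a generic scheme under assumed parameters, not the specific algorithm of the chapter.

```python
def diffuse_load(loads, neighbours, alpha=0.25, iterations=50):
    """First-order diffusion: l_i <- l_i + alpha * sum_j (l_j - l_i) over
    neighbouring processors j. `neighbours[i]` lists the processors adjacent
    to i in the processor graph; alpha must be small enough for stability."""
    loads = list(loads)
    for _ in range(iterations):
        updated = loads[:]
        for i, nbrs in enumerate(neighbours):
            updated[i] += alpha * sum(loads[j] - loads[i] for j in nbrs)
        loads = updated
    return loads

# Illustrative 4-processor ring with an initial imbalance; converges toward the mean.
print(diffuse_load([10, 2, 4, 8], [[1, 3], [0, 2], [1, 3], [0, 2]]))
```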
Abstract:
Excess nutrient loads carried by streams and rivers are a great concern for environmental resource managers. In agricultural regions, excess loads are transported downstream to receiving water bodies, potentially causing algal blooms, which could lead to numerous ecological problems. To better understand nutrient load transport, and to develop appropriate water management plans, it is important to have accurate estimates of annual nutrient loads. This study used a Monte Carlo sub-sampling method and error-corrected statistical models to estimate annual nitrate-N loads from two watersheds in central Illinois. The performance of three load estimation methods (the seven-parameter log-linear model, the ratio estimator, and the flow-weighted averaging estimator) applied at one-, two-, four-, six-, and eight-week sampling frequencies were compared. Five error correction techniques (the existing composite method and four new error correction techniques developed in this study) were applied to each combination of sampling frequency and load estimation method. On average, the most accurate error correction technique (proportional rectangular) resulted in 15% and 30% more accurate load estimates when compared to the most accurate uncorrected load estimation method (ratio estimator) for the two watersheds. Using error correction methods, it is possible to design more cost-effective monitoring plans by achieving the same load estimation accuracy with fewer observations. Finally, the optimum combinations of monitoring threshold and sampling frequency that minimize the number of samples required to achieve specified levels of accuracy in load estimation were determined. For one- to three-week sampling frequencies, combined threshold/fixed-interval monitoring approaches produced the best outcomes, while fixed-interval-only approaches produced the most accurate results for four- to eight-week sampling frequencies.
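As a hedged illustration of one of the uncorrected estimators compared above, the sketch below implements a simple flow-ratio estimator of annual load; the bias-correction term used in Beale-type ratio estimators is omitted for brevity, and the argument names and unit-conversion constant are assumptions rather than details taken from the study.

```python
import numpy as np

def ratio_estimator(sampled_conc, sampled_flow, all_daily_flow, k=86.4):
    """Simple ratio estimator of annual nitrate-N load.

    sampled_conc   : concentrations on sampled days (mg/L)
    sampled_flow   : daily mean flow on the same sampled days (m^3/s)
    all_daily_flow : daily mean flow for every day of the year (m^3/s)
    k              : unit conversion; 86.4 turns (mg/L * m^3/s) into kg/day

    Annual load = [mean(sampled daily load) / mean(sampled flow)]
                  * mean(all daily flow) * number of days.
    """
    sampled_load = k * np.asarray(sampled_conc) * np.asarray(sampled_flow)
    ratio = sampled_load.mean() / np.mean(sampled_flow)
    return ratio * np.mean(all_daily_flow) * len(all_daily_flow)
```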
Abstract:
In this study, magnesium is alloyed with varying amounts of the ferromagnetic alloying element cobalt in order to obtain lightweight load-sensitive materials with sensory properties which allow online monitoring of mechanical forces applied to components made from Mg-Co alloys. An optimized casting process with the use of extruded Mg-Co powder rods is utilized which enables the production of magnetic magnesium alloys with a reproducible Co concentration. The efficiency of the casting process is confirmed by SEM analyses. Microstructures and Co-rich precipitations of various Mg-Co alloys are investigated by means of EDS and XRD analyses. The Mg-Co alloys' mechanical strengths are determined by tensile tests. Magnetic properties of the Mg-Co sensor alloys depending on the cobalt content and the acting mechanical load are measured utilizing the harmonic analysis of eddy-current signals. Within the scope of this work, the influence of the element cobalt on magnesium is investigated in detail and an optimal cobalt concentration is defined based on the performed examinations.
Abstract:
This thesis describes the development and correlation of a thermal model that forms the foundation of a thermal capacitance spacecraft propellant load estimator. Specific details of creating the thermal model for the diaphragm propellant tank used on NASA's Magnetospheric Multiscale (MMS) spacecraft using ANSYS and the correlation process implemented are presented. The thermal model was correlated to within ±3 °C of the thermal vacuum test data, and was determined sufficient to make future propellant predictions on MMS. The model was also found to be relatively sensitive to uncertainties in applied heat flux and mass knowledge of the tank. More work is needed to improve temperature predictions in the upper hemisphere of the propellant tank, where predictions were found to be 2-2.5 °C lower than the test data. A road map for applying the model to predict propellant loads on the actual MMS spacecraft in 2017-2018 is also presented.
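A hedged sketch of the thermal capacitance gauging idea that such a correlated model supports: a known heat input is applied to the tank, the temperature rise is measured, and the propellant mass follows from a lumped heat balance. The tank heat capacity, heat input, and propellant specific heat below are illustrative assumptions, not MMS values.

```python
def propellant_mass_from_thermal_capacitance(q_applied_J, delta_T_K,
                                             tank_heat_capacity_J_per_K,
                                             propellant_cp_J_per_kgK):
    """Lumped-capacitance estimate: Q = (C_tank + m * c_p) * dT  =>  solve for m.
    Assumes negligible heat leak over the heating interval."""
    return (q_applied_J / delta_T_K - tank_heat_capacity_J_per_K) / propellant_cp_J_per_kgK

# Illustrative numbers only: 50 kJ input, 2 K rise, 800 J/K tank,
# hydrazine c_p of roughly 3080 J/(kg*K).
print(propellant_mass_from_thermal_capacitance(5.0e4, 2.0, 800.0, 3080.0))  # ~7.9 kg
```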