890 results for parametric resonances
Abstract:
Background: Foot ulcers are a frequent reason for diabetes-related hospitalisation. Clinical training is known to have a beneficial impact on foot ulcer outcomes, yet simulation-based clinical training has rarely been used in the management of diabetes-related foot complications or chronic wounds. Simulation can be defined as a device or environment that attempts to replicate the real world. The few non-web-based foot-related simulation courses have focused solely on training for a single skill or “part task” (for example, practicing ingrown toenail procedures on models). This pilot study primarily aimed to investigate the effect of a training program using multiple methods of simulation on participants’ clinical confidence in the management of foot ulcers. Methods: Sixteen podiatrists participated in a two-day Foot Ulcer Simulation Training (FUST) course. The course included pre-requisite web-based learning modules, practicing individual foot ulcer management part tasks (for example, debriding a model foot ulcer), and participating in replicated clinical consultation scenarios (for example, treating a standardised patient (actor) with a model foot ulcer). The primary outcome measure was participants’ clinical confidence, assessed with surveys completed before and after the course using a five-point Likert scale (1 = Unacceptable to 5 = Proficient). Participants’ knowledge, satisfaction and their perception of the relevance and fidelity (realism) of a range of course elements were also investigated. Parametric statistics were used to analyse the data: Pearson’s r for correlation, ANOVA for testing differences between groups, and a paired-sample t-test to determine the significance of differences between pre- and post-workshop scores. A minimum significance level of p < 0.05 was used. Results: An overall 42% improvement in clinical confidence was observed following completion of FUST (mean scores 3.10 compared to 4.40, p < 0.05).
The lack of an overall significant change in knowledge scores reflected the participant population’s high baseline knowledge and pre-requisite completion of the web-based modules. Satisfaction, relevance and fidelity of all course elements were rated highly. Conclusions: This pilot study suggests simulation training programs can improve participants’ clinical confidence in the management of foot ulcers. The approach has the potential to enhance clinical training in diabetes-related foot complications and chronic wounds in general.
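The pre/post comparison described above reduces to a paired-sample t-test on Likert scores. A minimal sketch, using made-up ratings for sixteen participants (illustrative only, not the study's data):

```python
import math

def paired_t(pre, post):
    """Paired-sample t statistic and degrees of freedom for pre/post scores."""
    d = [b - a for a, b in zip(pre, post)]
    n = len(d)
    mean_d = sum(d) / n
    var_d = sum((x - mean_d) ** 2 for x in d) / (n - 1)  # sample variance of differences
    return mean_d / math.sqrt(var_d / n), n - 1

# Hypothetical 5-point Likert confidence ratings for 16 participants
# (NOT the study's data).
pre  = [3, 3, 2, 4, 3, 3, 4, 2, 3, 3, 4, 3, 3, 2, 4, 3]
post = [4, 5, 4, 5, 4, 4, 5, 4, 4, 5, 5, 4, 4, 4, 5, 4]
t, df = paired_t(pre, post)  # compare t against the critical value for df = 15
```

The result is significant at p < 0.05 when t exceeds the two-tailed critical value for the given degrees of freedom.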
Abstract:
Nano silicon is widely used as an essential element of complementary metal–oxide–semiconductor (CMOS) devices and solar cells. A large portion of the world economy is now built on electronics products and related services, and with accessible fossil fuels running out quickly, research on nano silicon solar cells is growing. Further improvement of high-performance nano silicon components requires characterising the material properties of nano silicon. In particular, as the manufacturing process scales down to the nano level, advanced components become increasingly sensitive to the various defects induced by manufacturing. Defects in mono-crystalline silicon are known to have a significant influence on its properties under nanoindentation. However, the cost of practical nanoindentation, as well as the complexity of preparing specimens with controlled defects, slows further experimental research on the mechanical characterisation of defected silicon. Therefore, in the current study, molecular dynamics (MD) simulations are employed to investigate the properties of mono-crystalline silicon with different pre-existing defects, especially cavities, under nanoindentation. Parametric studies covering specimen size and loading rate are first conducted to optimise computational efficiency, and the optimised testing parameters are used for all simulations in the defect study. Based on the validated model, different pre-existing defects are introduced into the silicon substrate, and a group of nanoindentation simulations of these defected substrates is carried out. The simulation results are carefully investigated and compared with a perfect silicon substrate used as a benchmark. It is found that pre-existing cavities in the silicon substrate markedly influence the mechanical properties.
Furthermore, pre-existing cavities can absorb part of the strain energy during loading and release it during unloading, which possibly causes less plastic deformation in the substrate. However, when a pre-existing cavity is close enough to the deformation zone, or large enough that the stress around the spherical cavity exceeds what the surrounding crystal structure can bear, larger plastic deformation occurs and leads to the collapse of the structure. The influence on the mechanical properties of the silicon substrate depends on the location and size of the cavity: substrates with a larger cavity, or a cavity closer to the top surface, usually exhibit a larger reduction in Young’s modulus and hardness.
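The hardness and Young's modulus values discussed above are conventionally extracted from a nanoindentation load-displacement curve; one common route is the Oliver-Pharr analysis. A minimal sketch, assuming an ideal Berkovich tip area function and made-up inputs (this is background on the measurement, not the paper's MD method):

```python
import math

def oliver_pharr(P_max, S, h_max, eps=0.75):
    """Hardness and reduced modulus from an unloading curve via the
    Oliver-Pharr analysis, assuming an ideal Berkovich area function.
    P_max: peak load (N), S: unloading stiffness dP/dh (N/m),
    h_max: indentation depth at peak load (m)."""
    h_c = h_max - eps * P_max / S            # contact depth
    A = 24.5 * h_c ** 2                      # projected contact area (ideal tip)
    H = P_max / A                            # hardness (Pa)
    E_r = math.sqrt(math.pi) * S / (2.0 * math.sqrt(A))  # reduced modulus (Pa)
    return H, E_r

# Illustrative inputs only (not values from the simulations):
H, E_r = oliver_pharr(P_max=5e-3, S=2.5e5, h_max=200e-9)
```

A cavity-induced reduction in stiffness S or an increase in depth at a given load would show up directly as lower E_r and H in this analysis.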
Abstract:
The rank transform is a non-parametric transform which has been applied to the stereo matching problem. The advantages of this transform include its invariance to radiometric distortion and its amenability to hardware implementation. This paper describes the derivation of the rank constraint for matching using the rank transform. Previous work has shown that this constraint is capable of resolving ambiguous matches, thereby improving match reliability, and a new matching algorithm incorporating this constraint was also proposed. This paper extends that previous work by proposing a matching algorithm which uses a three-dimensional match surface in which the match score is computed for every possible template and match window combination. The principal advantage of this algorithm is that the match surface enforces the left-right consistency and uniqueness constraints, thus improving the algorithm's ability to remove invalid matches. Experimental results for a number of test stereo pairs show that the new algorithm is capable of identifying and removing a large number of incorrect matches, particularly in the case of occlusions.
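For reference, the rank transform itself is simple: each pixel is replaced by the number of pixels in a surrounding window whose intensity is below the centre value. A minimal pure-Python sketch (window radius is an implementation choice):

```python
def rank_transform(img, r=1):
    """Rank transform: each pixel becomes the count of neighbours in a
    (2r+1) x (2r+1) window whose intensity is less than the centre pixel.
    Border pixels (no full window) are left as zero."""
    h, w = len(img), len(img[0])
    out = [[0] * w for _ in range(h)]
    for y in range(r, h - r):
        for x in range(r, w - r):
            c = img[y][x]
            out[y][x] = sum(
                img[y + dy][x + dx] < c
                for dy in range(-r, r + 1)
                for dx in range(-r, r + 1)
                if (dy, dx) != (0, 0)
            )
    return out

img = [[1, 2, 3],
       [4, 5, 6],
       [7, 8, 9]]
scaled = [[2 * v for v in row] for row in img]
# Any monotonic intensity change preserves pixel ordering, so the
# transform is unchanged -- the radiometric invariance the text mentions.
assert rank_transform(img) == rank_transform(scaled)
```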
Abstract:
The rank transform is a non-parametric technique which has been recently proposed for the stereo matching problem. The motivation behind its application to the matching problem is its invariance to certain types of image distortion and noise, as well as its amenability to real-time implementation. This paper derives an analytic expression for the process of matching using the rank transform, and then goes on to derive one constraint which must be satisfied for a correct match. This has been dubbed the rank order constraint or simply the rank constraint. Experimental work has shown that this constraint is capable of resolving ambiguous matches, thereby improving matching reliability. This constraint was incorporated into a new algorithm for matching using the rank transform. This modified algorithm resulted in an increased proportion of correct matches, for all test imagery used.
Abstract:
The mining environment, being complex, irregular and time varying, presents a challenging prospect for stereo vision. The objective is to produce a stereo vision sensor suited to close-range scenes consisting primarily of rocks. This sensor should be able to produce a dense depth map within real-time constraints; speed and robustness are of foremost importance for this investigation. A number of area-based matching metrics have been implemented, including the SAD, SSD, NCC, and their zero-meaned versions. The NCC and the zero-meaned SAD and SSD were found to produce the disparity maps with the highest proportion of valid matches. The plain SAD and SSD were the least computationally expensive, since all of their operations take place in integer arithmetic; however, they were extremely sensitive to radiometric distortion. Non-parametric techniques for matching, in particular the rank and census transforms, have also been investigated. The rank and census transforms were found to be robust with respect to radiometric distortion, as well as able to produce disparity maps with a high proportion of valid matches. An additional advantage of both transforms is their amenability to fast hardware implementation.
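The area-based metrics compared above can be sketched in a few lines; the zero-meaned variant shows why it tolerates the additive radiometric (offset) distortion that defeats the plain SAD. A minimal illustration on flattened windows:

```python
import math

def sad(a, b):
    """Sum of absolute differences -- integer arithmetic only."""
    return sum(abs(x - y) for x, y in zip(a, b))

def ssd(a, b):
    """Sum of squared differences -- also integer arithmetic only."""
    return sum((x - y) ** 2 for x, y in zip(a, b))

def zsad(a, b):
    """Zero-mean SAD: subtracting the window means removes any constant
    intensity offset between the two cameras."""
    ma, mb = sum(a) / len(a), sum(b) / len(b)
    return sum(abs((x - ma) - (y - mb)) for x, y in zip(a, b))

def ncc(a, b):
    """Normalised cross-correlation: invariant to gain and offset."""
    ma, mb = sum(a) / len(a), sum(b) / len(b)
    num = sum((x - ma) * (y - mb) for x, y in zip(a, b))
    den = math.sqrt(sum((x - ma) ** 2 for x in a) * sum((y - mb) ** 2 for y in b))
    return num / den

w     = [10, 20, 30, 40]   # a flattened reference window
w_off = [15, 25, 35, 45]   # same scene content, +5 radiometric offset
```

Here `sad(w, w_off)` is non-zero despite identical scene content, while `zsad` and `ncc` correctly report a perfect match.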
Abstract:
Traditional area-based matching techniques make use of similarity metrics such as the Sum of Absolute Differences (SAD), Sum of Squared Differences (SSD) and Normalised Cross-Correlation (NCC). Non-parametric matching algorithms such as the rank and census transforms instead rely on the relative ordering of pixel values, rather than the pixel values themselves, as the similarity measure. Both traditional area-based and non-parametric stereo matching techniques have an algorithmic structure which is amenable to fast hardware realisation. This investigation undertakes a performance assessment of these two families of algorithms for robustness to radiometric distortion and random noise. A generic implementation framework for the stereo matching problem is presented, and the relative hardware requirements of the various metrics are investigated.
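A census transform sketch, showing the bit-string signature and the Hamming-distance matching cost that make this family attractive for hardware (the neighbour ordering and window radius below are implementation choices):

```python
def census(img, y, x, r=1):
    """Census transform at (y, x): a bit string encoding, for each
    neighbour in a (2r+1) x (2r+1) window, whether it is less than the
    centre pixel. Depends only on relative ordering of pixel values."""
    c = img[y][x]
    bits = 0
    for dy in range(-r, r + 1):
        for dx in range(-r, r + 1):
            if (dy, dx) == (0, 0):
                continue
            bits = (bits << 1) | (img[y + dy][x + dx] < c)
    return bits

def hamming(a, b):
    """Matching cost between two census signatures: count differing bits."""
    return bin(a ^ b).count("1")

img = [[1, 2, 3],
       [4, 5, 6],
       [7, 8, 9]]
tripled = [[3 * v for v in row] for row in img]
# Ordering-based, so a gain change leaves the signature untouched:
assert hamming(census(img, 1, 1), census(tripled, 1, 1)) == 0
```

In hardware, the signature is a fixed-width bit vector and the cost is a popcount of an XOR, both cheap to realise.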
Abstract:
The mining environment, being complex, irregular and time varying, presents a challenging prospect for stereo vision. For this application, speed, reliability, and the ability to produce a dense depth map are of foremost importance. This paper evaluates a number of matching techniques for possible use in a stereo vision sensor for mining automation applications. Area-based techniques have been investigated because they have the potential to yield dense maps, are amenable to fast hardware implementation, and are suited to textured scenes. In addition, two non-parametric transforms, namely, the rank and census, have been investigated. Matching algorithms using these transforms were found to have a number of clear advantages, including reliability in the presence of radiometric distortion, low computational complexity, and amenability to hardware implementation.
Abstract:
Fruit drying is a process of removing moisture to preserve fruit by preventing microbial spoilage. It increases shelf life, reduces weight and volume (thus minimising packing, storage, and transportation costs), and enables storage of food under ambient conditions. However, drying is a complex process involving coupled heat and mass transfer together with physical property changes and shrinkage of the material. Against this background, the aim of this paper is to develop a mathematical model to simulate coupled heat and mass transfer during convective drying of fruit. The model can be used to predict the temperature and moisture distribution inside the fruit during drying. Two models were developed, considering shrinkage-dependent and temperature-dependent moisture diffusivity, and the results were compared. The governing heat and mass transfer equations were solved and a parametric study performed with Comsol Multiphysics 4.3. The predicted results were validated against experimental data.
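The coupled model itself was solved in Comsol, but the mass transfer half can be illustrated with a one-dimensional Fickian moisture-diffusion sketch using explicit finite differences. All material values below are placeholders, not the paper's:

```python
def dry_slab(M0, M_air, D, dx, dt, n_nodes, steps):
    """Explicit finite-difference solution of 1-D Fickian moisture
    diffusion in a slab, dM/dt = D * d2M/dx2, with the drying surface
    (last node) held at the equilibrium moisture content M_air and a
    zero-flux symmetry condition at the slab centre (first node)."""
    r = D * dt / dx ** 2
    assert r <= 0.5, "explicit scheme unstable; reduce dt or refine dx"
    M = [M0] * n_nodes
    M[-1] = M_air
    for _ in range(steps):
        Mn = M[:]
        for i in range(1, n_nodes - 1):
            Mn[i] = M[i] + r * (M[i + 1] - 2 * M[i] + M[i - 1])
        Mn[0] = Mn[1]      # symmetry (zero-flux) boundary at the centre
        Mn[-1] = M_air     # surface fixed at ambient equilibrium
        M = Mn
    return M

# Placeholder values (NOT from the paper): 5 mm half-slab, D = 1e-9 m^2/s,
# dx = 0.5 mm, dt = 100 s, 50 time steps (~83 min of drying).
profile = dry_slab(M0=0.8, M_air=0.1, D=1e-9, dx=5e-4, dt=100.0,
                   n_nodes=11, steps=50)
```

The resulting profile decreases monotonically from the moist centre toward the drying surface, which is the qualitative moisture distribution the model predicts.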
Abstract:
The Attentional Control Theory (ACT) proposes that high-anxious individuals maintain performance effectiveness (accuracy) at the expense of processing efficiency (response time), in particular for the two central executive functions of inhibition and shifting. In contrast, research has generally failed to consider the third executive function, updating. In the current study, seventy-five participants completed the Parametric Go/No-Go and n-back tasks, as well as the State-Trait Anxiety Inventory, in order to explore the effects of anxiety on attention. Results indicated that anxiety led to a decline in processing efficiency, but not in performance effectiveness, across all three central executive functions (inhibition, set-shifting and updating). Interestingly, participants with high levels of trait anxiety also exhibited impaired performance effectiveness on the n-back task designed to measure the updating function. Findings are discussed in relation to developing a new model of ACT that also includes the role of preattentive processes and dual-task coordination when exploring the effects of anxiety on task performance.
Abstract:
LiteSteel beam (LSB) is a new cold-formed steel hollow flange channel section produced using a simultaneous cold-forming and dual electric resistance welding process. It is commonly used for floor joists and bearers with web openings in residential, industrial and commercial buildings. The shear strength of LSBs is considerably reduced when web openings are included for the purpose of locating building services. A cost-effective method of eliminating the detrimental effects of a large web opening is to attach suitable stiffeners around the web openings of LSBs. Experimental and numerical studies were undertaken to investigate the shear behaviour and strength of LSBs with circular web openings reinforced using plate, stud, transverse and sleeve stiffeners of varying sizes and thicknesses. Both welding and varying screw-fastening arrangements were used to attach these stiffeners to the web of the LSBs. Finite element models of LSBs with stiffened web openings in shear were developed to simulate their shear behaviour and strength. They were validated against experimental test results and then used in a detailed parametric study. These studies showed that plate stiffeners were the most suitable; however, their use based on the current American standards was found to be inadequate. Suitable screw-fastened plate stiffener arrangements with optimum thicknesses are proposed for LSBs with web openings to restore their original shear capacity. This paper presents the details of the numerical study and its results.
Abstract:
New materials technology has provided the potential for the development of an innovative Hybrid Composite Floor Plate System (HCFPS) with many desirable properties: it is lightweight, easy to construct, economical, demountable, recyclable and reusable. Component materials of HCFPS include a central Polyurethane (PU) core, outer layers of Glass-fibre Reinforced Cement (GRC), and steel laminates in the tensile regions. HCFPS is configured so that the positive inherent properties of the individual component materials combine to offset any weaknesses and achieve optimum performance. Research has been carried out using extensive Finite Element (FE) computer simulations supported by experimental testing, and both the strength and serviceability requirements have been established for this lightweight floor plate system. This paper presents some of the research towards the development of HCFPS, along with a parametric study to select suitable span lengths.
Abstract:
Cold-formed steel beams are increasingly used as floor joists and bearers in buildings, and their behaviour and moment capacities are often influenced by lateral-torsional buckling. With the increasing usage of cold-formed steel beams, their fire safety design has become an important issue, yet fire design rules are commonly based on past research on hot-rolled steel beams. Hence a detailed parametric study was undertaken using validated finite element models to investigate the lateral-torsional buckling behaviour of simply supported cold-formed steel lipped channel beams subjected to uniform bending at uniform elevated temperatures. The moment capacity results were compared with the predictions from the available ambient temperature and fire design rules and suitable recommendations were made. European fire design rules were found to be over-conservative, while the ambient temperature design rules could not be used based on a single buckling curve. Hence a new design method was proposed that includes the important non-linear stress-strain characteristics observed for cold-formed steels at elevated temperatures. Comparison with numerical moment capacities demonstrated the accuracy of the new design method. This paper presents the details of the parametric study, comparisons with current design rules, and the new design rules proposed in this research for lateral-torsional buckling of cold-formed steel lipped channel beams at elevated temperatures.
Abstract:
A numerical investigation of mixed convection in two-dimensional incompressible laminar flow over a horizontal flat plate with a streamwise sinusoidal distribution of surface temperature has been performed for different values of the Rayleigh number, Reynolds number and frequency of the periodic temperature, at constant Prandtl number and amplitude of the periodic temperature. A finite element method adapted to rectangular non-uniform mesh elements, with a non-linear parametric solution algorithm, has been employed as the numerical scheme. The effect of varying each investigating parameter on the mixed convection flow characteristics has been studied, keeping the other parameters constant, to observe the hydrodynamic and thermal behaviour. The fluid considered in this study is air, with Prandtl number 0.72. Results are obtained for Rayleigh numbers from 10² to 10⁴, Reynolds numbers from 1 to 100, and periodic-temperature frequencies from 1 to 5. Isotherms, streamlines, and average and local Nusselt numbers are presented to show the effect of the different values of the investigating parameters on fluid flow and heat transfer.
Abstract:
The ability to estimate asset reliability and the probability of failure is critical to reducing maintenance costs, operational downtime, and safety hazards. Predicting survival time and the probability of failure at a future time is an indispensable requirement in prognostics and asset health management. In traditional reliability models, the lifetime of an asset is estimated using failure event data alone; however, statistically sufficient failure event data are often difficult to obtain in real-life situations due to poor data management, effective preventive maintenance, and the small population of identical assets in use. Condition indicators and operating environment indicators are two types of covariate data that are normally obtained in addition to failure event and suspension data. These data contain significant information about the state and health of an asset: condition indicators reflect the level of degradation of assets, while operating environment indicators accelerate or decelerate the lifetime of assets. When these data are available, an alternative to traditional reliability analysis is to model condition indicators, operating environment indicators and their failure-generating mechanisms using a covariate-based hazard model. The literature review indicates that a number of covariate-based hazard models have been developed, all based on the underlying theory of the Proportional Hazard Model (PHM). However, most of these models have not attracted much attention in the field of machinery prognostics. Moreover, owing to the prominence of PHM, attempts at developing alternative models have to some extent been stifled, although a number of alternatives to PHM have been suggested. The existing covariate-based hazard models fail to fully utilise the three types of asset health information (failure event data, i.e. observed and/or suspended; condition data; and operating environment data) in a single model for more effective hazard and reliability predictions.
In addition, current research shows that condition indicators and operating environment indicators have different characteristics and are non-homogeneous covariate data: condition indicators act as response (dependent) variables, whereas operating environment indicators act as explanatory (independent) variables. Nevertheless, these non-homogeneous covariate data were modelled in the same way for hazard prediction in the existing covariate-based hazard models. The related, and yet more imperative, question is how both of these indicators should be effectively modelled and integrated into a covariate-based hazard model. This work presents a new approach for addressing these challenges. The new covariate-based hazard model, termed the Explicit Hazard Model (EHM), explicitly and effectively incorporates all three sources of asset health information into the modelling of hazard and reliability predictions, and also derives the relationship between actual asset health and condition measurements as well as operating environment measurements. The theoretical development of the model and its parameter estimation method are demonstrated in this work. EHM assumes that the baseline hazard is a function of both time and condition indicators. Condition indicators provide information about the health condition of an asset; they therefore update and reform the baseline hazard of EHM according to the health state of the asset at a given time t. Examples of condition indicators include the vibration of rotating machinery, the level of metal particles in engine oil analysis, and wear in a component.
Operating environment indicators in this model are failure accelerators and/or decelerators that are included in the covariate function of EHM and may increase or decrease the value of the hazard relative to the baseline hazard. These indicators are caused by the environment in which an asset operates and are not explicitly captured by the condition indicators (e.g. loads, environmental stresses, and other dynamically changing environmental factors). While the effects of operating environment indicators could be nil in EHM, condition indicators are always present, because they are observed and measured for as long as an asset is operational. EHM has several advantages over the existing covariate-based hazard models. One is that the model utilises three different sources of asset health data (population characteristics, condition indicators, and operating environment indicators) to predict hazard and reliability effectively. Another is that EHM explicitly investigates the relationship between condition and operating environment indicators and their association with the hazard of an asset. Furthermore, the proportionality assumption, from which most covariate-based hazard models suffer, does not exist in EHM. Depending on the sample size of failure/suspension times, EHM is developed in two forms: semi-parametric and non-parametric. The semi-parametric EHM assumes a specified lifetime distribution (i.e. the Weibull distribution) in the form of the baseline hazard. However, in many industrial applications, sparse failure event data mean that the analysis often involves complex distributional shapes about which little is known. Therefore, to avoid the restrictive assumption of a specified lifetime distribution for failure event histories, the non-parametric EHM, a distribution-free model, has been developed. The development of EHM in two forms is another merit of the model.
A case study was conducted using laboratory experiment data to validate the practicality of both the semi-parametric and non-parametric EHMs. The performance of the newly developed models was appraised by comparing their estimates with those of other existing covariate-based hazard models. The comparison demonstrated that both the semi-parametric and non-parametric EHMs outperform the existing covariate-based hazard models. Future research directions are also identified, regarding a new parameter estimation method for the case of time-dependent covariate effects and missing data, application of EHM to both repairable and non-repairable systems using field data, and a decision support model linked to the estimated reliability results.
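The Proportional Hazard Model that underlies the covariate-based models discussed above can be sketched directly. This is the generic Weibull-baseline PHM, not the EHM itself, and every parameter value below is illustrative:

```python
import math

def weibull_phm_hazard(t, z, eta, m, gamma):
    """Proportional Hazard Model with a Weibull baseline:
    h(t | z) = (m/eta) * (t/eta)**(m-1) * exp(sum(gamma_i * z_i)),
    where z are covariates (e.g. condition indicators) and gamma their
    coefficients. All parameter values here are illustrative."""
    baseline = (m / eta) * (t / eta) ** (m - 1)
    return baseline * math.exp(sum(g * zi for g, zi in zip(gamma, z)))

def reliability(t, z, eta, m, gamma, n=1000):
    """R(t) = exp(-H(t)), with the cumulative hazard H(t) computed by
    trapezoidal integration of the hazard (covariates held constant)."""
    du = t / n
    H = 0.0
    for i in range(n):
        # tiny offset avoids 0**negative when the shape parameter m < 1
        H += 0.5 * (weibull_phm_hazard(i * du + 1e-12, z, eta, m, gamma)
                    + weibull_phm_hazard((i + 1) * du, z, eta, m, gamma)) * du
    return math.exp(-H)

# A single covariate z = 1 with coefficient 0.5 scales the hazard above
# the baseline (made-up numbers: eta = 1000 h, shape m = 2).
R = reliability(500.0, z=[1.0], eta=1000.0, m=2.0, gamma=[0.5])
```

The proportionality assumption is visible in the code: the covariate term multiplies the baseline uniformly at every t, which is exactly the restriction the EHM is said to avoid.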
Abstract:
Load modeling plays an important role in power system dynamic stability assessment. One widely used method of assessing the impact of load models on system dynamic response is parametric sensitivity analysis, with load ranking providing an effective measure of such impact. Traditionally, load ranking has been based on either a static or a dynamic load model alone. In this paper, a load ranking framework based on a composite load model is proposed. It enables comprehensive investigation of load modeling impacts on system stability, considering the dynamic interactions between load and system dynamics. The impact of load composition on the overall sensitivity, and therefore on the ranking of the load, is also investigated. Dynamic simulations are performed to further elucidate the results obtained through the sensitivity-based load ranking approach.
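The parametric-sensitivity ranking idea can be sketched with a finite-difference approximation. Here `toy_response` is a hypothetical stand-in for a simulated stability index; the parameter names and weights are made up for illustration:

```python
def rank_loads(response, params, delta=1e-6):
    """Rank parameters by the magnitude of the finite-difference
    sensitivity of a scalar response metric: |dR/dp| ~ (R(p+d) - R(p)) / d.
    `response` is any callable mapping a parameter dict to a number."""
    base = response(params)
    sens = {}
    for name, value in params.items():
        perturbed = dict(params)
        perturbed[name] = value + delta
        sens[name] = abs((response(perturbed) - base) / delta)
    return sorted(sens, key=sens.get, reverse=True)  # most sensitive first

# Hypothetical response surrogate (NOT the paper's model): a weighted
# combination standing in for a dynamic stability index.
def toy_response(p):
    return 5.0 * p["Kp"] + 0.5 * p["Kq"] + 0.01 * p["Tp"]

ranking = rank_loads(toy_response, {"Kp": 1.0, "Kq": 1.0, "Tp": 1.0})
```

In a real study, `response` would wrap a dynamic simulation run, and the ranking would identify which load-model parameters most influence system stability.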