357 results for Proximal Point Algorithm
Abstract:
To describe barefoot, shod and in-shoe kinematics during the stance phase of walking gait in a normal-arched adult population. An equal sample of males and females (n = 24) was recruited. In order to quantify the effect of footwear independent of technical design features, an ASICS shoe (Onitsuka Tiger-Mexico 66, Japan) was used in this study. Markers were applied in three conditions: barefoot, shod, and in-shoe. The calibration markers were used to define the static pose. The order of testing was randomised. Participants completed five trials in each condition. Kinematic data were captured using a 12-camera VICON MX40 motion capture system at 100 Hz and processed in Visual3D. A previously developed model was used to describe joint angles [1]. A univariate two-way ANOVA was used to identify any differences between the pairs of conditions. Post-hoc Scheffé tests were used to further interrogate the data for differences. At peak hallux dorsiflexion (Figure 1), during propulsion, the metatarsophalangeal joint (MPTJ) was significantly more dorsiflexed in the barefoot condition than in the shod condition (p = 0.004). At the same gait event, the tibiocalcaneal joint (TCJ) was significantly more plantarflexed in the barefoot condition than in both the shod and in-shoe conditions (p < 0.001), and the tarsometatarsal joint (TMTJ) was significantly less dorsiflexed in the barefoot condition than in the shod and in-shoe conditions (p < 0.001). The findings of the current study demonstrate that footwear has significant effects on sagittal-plane MPTJ dorsiflexion at peak hallux dorsiflexion, which results in compensations at the proximal foot joints.
Abstract:
Purpose The use of intravascular devices is associated with a number of potential complications. Despite a number of evidence-based clinical guidelines in this area, nursing practice discrepancies persist. This study aims to examine nursing practice in a cancer care setting and to identify areas for improvement relative to the best available evidence. Methods A point prevalence survey was undertaken in a tertiary cancer care centre in Queensland, Australia. On a randomly selected day, four nurses assessed intravascular device related nursing practices and collected data using a standardized survey tool. Results All 58 inpatients (100%) were assessed. Forty-eight (83%) had a device in situ, comprising 14 Peripheral Intravenous Catheters (29.2%), 14 Peripherally Inserted Central Catheters (29.2%), 14 Hickman catheters (29.2%) and six Port-a-Caths (12.5%). Suboptimal outcomes such as local site complications, incorrect or inadequate documentation, lack of flushing orders, and unclean or non-intact dressings were observed. Conclusions This study has highlighted a number of intravascular device related nursing practice discrepancies compared with current hospital policy. Education and other implementation strategies can be applied to improve nursing practice. Following such strategies, it will be valuable to repeat this survey on a regular basis to provide feedback to nursing staff and to refine strategies for improving practice. More research is required to provide evidence for clinical practice with regard to intravascular device related consumables, flushing technique and protocols.
Abstract:
A major challenge for robot localization and mapping systems is maintaining reliable operation in a changing environment. Vision-based systems in particular are susceptible to changes in illumination and weather, and the same location at another time of day may appear radically different to a feature-based visual localization system. One approach for mapping changing environments is to create and maintain maps that contain multiple representations of each physical location in a topological framework or manifold. However, this requires the system to be able to correctly link two or more appearance representations to the same spatial location, even though the representations may appear quite dissimilar. This paper proposes a method of linking visual representations from the same location without requiring a visual match, thereby allowing vision-based localization systems to create multiple appearance representations of physical locations. The most likely position on the robot path is determined using particle filter methods based on dead reckoning data and recent visual loop closures. In order to avoid erroneous loop closures, the odometry-based inferences are only accepted when the inferred path's end point is confirmed as correct by the visual matching system. Algorithm performance is demonstrated using an indoor robot dataset and a large outdoor camera dataset.
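The odometry-plus-loop-closure position inference described above can be sketched with a basic particle filter. The following is a minimal illustrative example, not the paper's implementation: the 1-D path, noise levels and observation model are all assumed. Particles are propagated with dead-reckoning increments, reweighted by a visual loop-closure observation, and resampled.

```python
import numpy as np

rng = np.random.default_rng(0)

def propagate(particles, odom, noise=0.05):
    """Move each particle by the dead-reckoning increment plus noise."""
    return particles + odom + rng.normal(0.0, noise, size=particles.shape)

def reweight(particles, observed_pos, sigma=0.2):
    """Weight particles by the likelihood of a loop closure at observed_pos."""
    w = np.exp(-0.5 * ((particles - observed_pos) / sigma) ** 2)
    return w / w.sum()

def resample(particles, weights):
    """Draw a new particle set in proportion to the weights."""
    idx = rng.choice(len(particles), size=len(particles), p=weights)
    return particles[idx]

# Simulate: the robot moves +1.0 per step from position 0; after three steps
# a visual loop closure is observed at position 3.0.
particles = np.zeros(500)
for _ in range(3):
    particles = propagate(particles, odom=1.0)
weights = reweight(particles, observed_pos=3.0)
particles = resample(particles, weights)
estimate = float(particles.mean())  # most likely position on the path
```

The visual confirmation step in the paper would then accept or reject this estimate before linking representations.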
Abstract:
Cyclostationary models for the diagnostic signals measured on faulty rotating machineries have proved to be successful in many laboratory tests and industrial applications. The squared envelope spectrum has been pointed out as the most efficient indicator for the assessment of second order cyclostationary symptoms of damages, which are typical, for instance, of rolling element bearing faults. In an attempt to foster the spread of rotating machinery diagnostics, the current trend in the field is to reach higher levels of automation of the condition monitoring systems. For this purpose, statistical tests for the presence of cyclostationarity have been proposed in recent years. The statistical thresholds proposed in the past for the identification of cyclostationary components were obtained under the hypothesis that the signal of a healthy component is white noise. This assumption, combined with the non-white nature of real signals, implies the necessity of pre-whitening or filtering the signal in optimal narrow bands, increasing the complexity of the algorithm and the risk of losing diagnostic information or introducing biases in the result. In this paper, the authors introduce an original analytical derivation of the statistical tests for cyclostationarity in the squared envelope spectrum, dropping the hypothesis of white noise from the beginning. The effect of first order and second order cyclostationary components on the distribution of the squared envelope spectrum is quantified and the effectiveness of the newly proposed threshold verified, providing a sound theoretical basis and a practical starting point for efficient automated diagnostics of machine components such as rolling element bearings. The analytical results are verified by means of numerical simulations and by using experimental vibration data of rolling element bearings.
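The squared envelope spectrum referred to above can be computed from the analytic signal of the vibration record. Below is a minimal illustrative sketch (the paper's contribution is the statistical thresholds, not this computation; the test signal and parameters are assumed): for a carrier amplitude-modulated at 30 Hz, the modulation frequency should dominate the squared envelope spectrum.

```python
import numpy as np

def squared_envelope_spectrum(x, fs):
    """Spectrum of the squared envelope of x, via the analytic signal."""
    n = len(x)
    # Analytic signal via the frequency-domain Hilbert transform
    X = np.fft.fft(x)
    h = np.zeros(n)
    h[0] = 1.0
    h[1:(n + 1) // 2] = 2.0
    if n % 2 == 0:
        h[n // 2] = 1.0
    analytic = np.fft.ifft(X * h)
    se = np.abs(analytic) ** 2      # squared envelope
    se = se - se.mean()             # drop the DC component
    ses = np.abs(np.fft.rfft(se)) / n
    freqs = np.fft.rfftfreq(n, 1.0 / fs)
    return freqs, ses

# A 1 kHz carrier amplitude-modulated at 30 Hz (e.g. a fault repetition rate).
fs, n = 8192.0, 8192
t = np.arange(n) / fs
x = (1.0 + 0.8 * np.cos(2 * np.pi * 30 * t)) * np.sin(2 * np.pi * 1000 * t)
freqs, ses = squared_envelope_spectrum(x, fs)
peak = float(freqs[1:][np.argmax(ses[1:])])  # skip the zero-frequency bin
```

In bearing diagnostics the peak would be compared against a statistical threshold such as the one derived in the paper.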
Abstract:
Voltage unbalance is a major power quality problem in low voltage residential feeders, due to the random location and rating of single-phase rooftop photovoltaic (PV) units. In this paper, two improvement methods based on the application of series (DVR) and parallel (DSTATCOM) custom power devices are investigated to mitigate the voltage unbalance problem in these feeders. First, based on load flow analysis carried out in MATLAB, the effectiveness of these two custom power devices in reducing voltage unbalance is studied for urban and semi-urban/rural feeders containing rooftop PVs, with respect to both installation location and rating. Later, a Monte Carlo based stochastic analysis is carried out to investigate their efficacy under different uncertainties of load and of PV rating and location in the network. After the numerical analyses, a converter topology and control algorithm are proposed for the DSTATCOM and DVR for balancing the network voltage at their point of common coupling. A state feedback control, based on the pole-shift technique, is developed to regulate the voltage at the output of the DSTATCOM and DVR converters such that voltage balancing is achieved in the network. The dynamic feasibility of voltage unbalance reduction and profile improvement in LV feeders, using the proposed structure and control algorithm for the DSTATCOM and DVR, is verified through detailed PSCAD/EMTDC simulations.
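The unbalance that such devices aim to reduce is commonly quantified by the voltage unbalance factor (VUF), the ratio of negative- to positive-sequence voltage. A minimal sketch using symmetrical components (the phase values are assumed for illustration; the paper's network models are far more detailed):

```python
import cmath
import math

def voltage_unbalance_factor(va, vb, vc):
    """VUF (%) = |negative-sequence| / |positive-sequence| * 100,
    computed from the three phase-voltage phasors (complex values)."""
    a = cmath.exp(2j * math.pi / 3)      # 120-degree rotation operator
    v1 = (va + a * vb + a * a * vc) / 3  # positive-sequence component
    v2 = (va + a * a * vb + a * vc) / 3  # negative-sequence component
    return 100.0 * abs(v2) / abs(v1)

# A balanced 230 V set gives ~0 %; sagging one phase introduces unbalance.
vb = cmath.rect(230.0, -2 * math.pi / 3)
vc = cmath.rect(230.0, 2 * math.pi / 3)
balanced = voltage_unbalance_factor(cmath.rect(230.0, 0.0), vb, vc)
unbalanced = voltage_unbalance_factor(cmath.rect(200.0, 0.0), vb, vc)
```

A DVR or DSTATCOM acting at the point of common coupling would drive this figure back toward zero.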
Abstract:
The railway crew scheduling problem is the process of allocating train services to crew duties based on the published train timetable while satisfying operational and contractual requirements. The problem is restricted by many constraints and belongs to the class of NP-hard problems. In this paper, we develop a mathematical model for railway crew scheduling with the aim of minimising the number of crew duties by reducing idle transition times. Duties are generated by arranging scheduled trips over a set of duties and sequentially ordering the trips within each duty. The optimisation model includes the time period of relief opportunities within which a train crew can be relieved at any relief point. Existing models and algorithms usually consider relieving a crew only at the beginning of the interval of relief opportunities, which may be impractical. This model involves a large number of decision variables and constraints, and therefore a constructive heuristic hybridised with a simulated annealing search algorithm is applied to yield an optimal or near-optimal schedule. The performance of the proposed algorithms is evaluated through computational experiments on randomly generated test instances. The results show that the proposed approaches obtain near-optimal solutions in a reasonable computational time for large-sized problems.
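The simulated annealing component can be sketched generically as follows (an illustrative skeleton only, not the paper's hybrid heuristic; the trip data, swap neighbourhood and cooling schedule are assumptions): a duty is treated as a sequence of trips, and the cost charges idle time between consecutive trips while heavily penalising overlaps.

```python
import math
import random

random.seed(7)

# Hypothetical trips as (start_hour, end_hour). Idle time accrues between
# the end of one trip and the start of the next; an overlap (the next trip
# starting before the previous one ends) is infeasible.
trips = [(9, 10), (6, 7.5), (13, 14), (8, 8.75), (11, 12.5), (15, 16)]

def cost(order):
    """Total idle time of a duty, with a heavy penalty per overlap."""
    total = 0.0
    for prev, nxt in zip(order, order[1:]):
        gap = nxt[0] - prev[1]
        total += gap if gap >= 0 else 100.0
    return total

def anneal(order, temp=5.0, cooling=0.995, steps=4000):
    """Swap-neighbourhood simulated annealing with geometric cooling."""
    best = cur = order[:]
    for _ in range(steps):
        i, j = random.sample(range(len(cur)), 2)
        cand = cur[:]
        cand[i], cand[j] = cand[j], cand[i]
        delta = cost(cand) - cost(cur)
        if delta < 0 or random.random() < math.exp(-delta / temp):
            cur = cand
            if cost(cur) < cost(best):
                best = cur[:]
        temp *= cooling
    return best

schedule = anneal(trips)
```

The paper's method additionally constructs the initial duties heuristically and handles relief-opportunity windows, which this sketch omits.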
Abstract:
In many active noise control (ANC) applications, an online secondary path modelling method that uses white noise as a training signal is required. This paper proposes a new feedback ANC system in which both the FxLMS and VSS-LMS algorithms are modified to improve the noise attenuation and modelling accuracy of the overall system. The proposed algorithm stops injection of the white noise at the optimum point and reactivates the injection during operation, if needed, to maintain the performance of the system. Preventing continuous injection of the white noise improves the performance of the proposed method significantly and makes it more desirable for practical ANC systems. Computer simulation results shown in this paper indicate the effectiveness of the proposed method.
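The FxLMS baseline that such systems build on can be sketched as follows. This is an illustrative sketch under assumed primary and secondary paths, with a perfect secondary-path model; the paper's contributions (the modified variable-step-size scheme and gated white-noise injection) are not reproduced here.

```python
import numpy as np

rng = np.random.default_rng(3)

# Assumed paths for the sketch (real ones would be acoustic responses).
s = np.array([0.9, 0.4])               # secondary path
s_hat = s.copy()                       # offline model, assumed perfect here
primary = np.array([1.0, -0.3, 0.1])   # primary noise path

n, L, mu = 4000, 8, 0.01
x = rng.normal(size=n)                 # reference noise
d = np.convolve(x, primary)[:n]        # disturbance at the error microphone

w = np.zeros(L)                        # adaptive control filter
xbuf = np.zeros(L)                     # reference history
ybuf = np.zeros(len(s))                # anti-noise history
fxbuf = np.zeros(L)                    # filtered-reference history
sq_err = []
for i in range(n):
    xbuf = np.roll(xbuf, 1); xbuf[0] = x[i]
    y = w @ xbuf                       # anti-noise sample
    ybuf = np.roll(ybuf, 1); ybuf[0] = y
    e = d[i] - s @ ybuf                # residual at the error microphone
    fx = s_hat @ xbuf[:len(s_hat)]     # reference filtered by the path model
    fxbuf = np.roll(fxbuf, 1); fxbuf[0] = fx
    w += mu * e * fxbuf                # FxLMS update
    sq_err.append(e * e)

before = float(np.mean(sq_err[:200]))  # residual power early on
after = float(np.mean(sq_err[-200:]))  # residual power after adaptation
```

In an online setting the secondary-path model itself would be adapted from the injected white noise, which is the part the paper's gating strategy controls.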
Abstract:
Point-to-point speed cameras are a relatively new and innovative technological approach to speed enforcement that is increasingly being used in a number of highly motorised countries. Previous research has provided evidence of the positive impact of this approach on vehicle speeds and crash rates, as well as on additional traffic-related outcomes such as vehicle emissions and traffic flow. This paper reports on the conclusions and recommendations of a large-scale project involving extensive consultation with international and domestic (Australian) stakeholders to explore the technological, operational, and legislative characteristics associated with the technology. More specifically, this paper provides a number of recommendations for better practice regarding the implementation of point-to-point speed enforcement in the Australian and New Zealand context. The broader implications of the research, as well as directions for future research, are also discussed.
Abstract:
As the all-atom molecular dynamics method is limited by its enormous computational cost, various coarse-grained strategies have been developed to extend the accessible length scales of soft matter in the modeling of mechanical behaviors. However, the classical thermostat algorithms in highly coarse-grained molecular dynamics would underestimate the thermodynamic behaviors of soft matter (e.g. microfilaments in cells), which can weaken the ability of materials to overcome local energy traps in granular modeling. Based on all-atom molecular dynamics modeling of microfilament fragments (G-actin clusters), a new stochastic thermostat algorithm is developed to retain the representation of the thermodynamic properties of microfilaments at an extra coarse-grained level. The accuracy of this stochastic thermostat algorithm is validated against all-atom MD simulation. This new stochastic thermostat algorithm provides an efficient way to investigate the thermomechanical properties of large-scale soft matter.
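The general idea of a stochastic thermostat can be illustrated with a simple Langevin scheme, in which a friction term and a matched random force hold coarse-grained beads at a target temperature. This is a generic sketch, not the algorithm developed in the paper; the masses, trap stiffness and reduced units (kB = 1) are all assumed.

```python
import numpy as np

rng = np.random.default_rng(42)

n, dim = 2000, 3                # beads and spatial dimensions (assumed)
m, k = 1.0, 1.0                 # bead mass, harmonic trap stiffness (assumed)
gamma, kT, dt = 1.0, 0.5, 0.01  # friction, target temperature, time step

x = np.zeros((n, dim))
v = np.zeros((n, dim))
for _ in range(5000):
    # Conservative force + friction + fluctuation (Euler-Maruyama step);
    # the noise amplitude sqrt(2*gamma*kT/dt) satisfies the
    # fluctuation-dissipation relation for this discretisation.
    noise = np.sqrt(2.0 * gamma * kT / dt) * rng.normal(size=(n, dim))
    f = -k * x - gamma * v + noise
    v += f / m * dt
    x += v * dt

# Equipartition check: <m v^2> per degree of freedom should approach kT.
measured_kT = float(m * np.mean(v ** 2))
```

The random-force term is what lets the coarse-grained beads escape local energy traps that a purely deterministic scheme would leave them stuck in.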
Abstract:
Currently, the GNSS computing modes are of two classes: network-based data processing and user receiver-based processing. A GNSS reference receiver station essentially contributes raw measurement data in either the RINEX file format or as real-time data streams in the RTCM format. Very little computation is carried out by the reference station. The existing network-based processing modes, regardless of whether they are executed in real-time or post-processed modes, are centralised or sequential. This paper describes a distributed GNSS computing framework that incorporates three GNSS modes: reference station-based, user receiver-based and network-based data processing. Raw data streams from each GNSS reference receiver station are processed in a distributed manner, i.e., either at the station itself or at a hosting data server/processor, to generate station-based solutions, or reference receiver-specific parameters. These may include precise receiver clock, zenith tropospheric delay, differential code biases, ambiguity parameters, ionospheric delays, as well as line-of-sight information such as azimuth and elevation angles. Covariance information for estimated parameters may also be optionally provided. In such a mode the nearby precise point positioning (PPP) or real-time kinematic (RTK) users can directly use the corrections from all or some of the stations for real-time precise positioning via a data server. At the user receiver, PPP and RTK techniques are unified under the same observation models, and the distinction is how the user receiver software deals with corrections from the reference station solutions and the ambiguity estimation in the observation equations. Numerical tests demonstrate good convergence behaviour for differential code bias and ambiguity estimates derived individually with single reference stations. 
With station-based solutions from three reference stations within distances of 22–103 km the user receiver positioning results, with various schemes, show an accuracy improvement of the proposed station-augmented PPP and ambiguity-fixed PPP solutions with respect to the standard float PPP solutions without station augmentation and ambiguity resolutions. Overall, the proposed reference station-based GNSS computing mode can support PPP and RTK positioning services as a simpler alternative to the existing network-based RTK or regionally augmented PPP systems.
Abstract:
BACKGROUND Tubulointerstitial lesions, characterized by tubular injury, interstitial fibrosis and the appearance of myofibroblasts, are the strongest predictors of the degree and progression of chronic renal failure. These lesions are typically preceded by macrophage infiltration of the tubulointerstitium, raising the possibility that these inflammatory cells promote progressive renal disease through fibrogenic actions on resident tubulointerstitial cells. The aim of the present study, therefore, was to investigate the potentially fibrogenic mechanisms of interleukin-1beta (IL-1beta), a macrophage-derived pro-inflammatory cytokine, on human proximal tubule cells (PTC). METHODS Confluent, quiescent, passage 2 PTC were established in primary culture from histologically normal segments of human renal cortex (N = 11) and then incubated in serum- and hormone-free media supplemented with either IL-1beta (0 to 4 ng/mL) or vehicle (control). RESULTS IL-1beta significantly enhanced fibronectin secretion by up to fourfold in a time- and concentration-dependent fashion. This was accompanied by significant (2.5- to 6-fold) increases in alpha-smooth muscle actin (alpha-SMA) expression, transforming growth factor beta (TGF-beta1) secretion, nitric oxide (NO) production, NO synthase 2 (NOS2) mRNA and lactate dehydrogenase (LDH) release. Cell proliferation was dose-dependently suppressed by IL-1beta. NG-methyl-L-arginine (L-NMMA; 1 mmol/L), a specific inhibitor of NOS, blocked NO production but did not alter basal or IL-1beta-stimulated fibronectin secretion. In contrast, a pan-specific TGF-beta neutralizing antibody significantly blocked the effects of IL-1beta on PTC fibronectin secretion (IL-1beta 268.1 +/- 30.6% vs. IL-1beta+alphaTGF-beta 157.9 +/- 14.4% of control values, P < 0.001) and DNA synthesis (IL-1beta 81.0 +/- 6.7% vs. IL-1beta+alphaTGF-beta 93.4 +/- 2.1% of control values, P < 0.01).
CONCLUSION IL-1beta acts on human PTC to suppress cell proliferation, enhance fibronectin production and promote alpha-smooth muscle actin expression. These actions appear to be mediated by a TGF-beta1 dependent mechanism and are independent of nitric oxide release.
Abstract:
To enhance the performance of the k-nearest neighbors approach in forecasting short-term traffic volume, this paper proposes and tests a two-step approach capable of multi-step forecasting. In selecting the k nearest neighbors, a time constraint window is introduced, and local minima of the distances between state vectors are then ranked to avoid overlap among candidates. Moreover, to control the undesirable impact of extreme values, a novel algorithm with attractive analytical features is developed based on the principal component. The enhanced KNN method has been evaluated using field data, and a comparative analysis shows that it outperforms the competing algorithms in most cases.
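The time-constraint-window idea can be sketched as follows (a simplified illustration of that one step, not the paper's full two-step method; the synthetic data, window width and Euclidean distance are assumptions): candidate neighbors are drawn only from states observed at a similar time of day, and the forecast is the mean of their successors.

```python
import numpy as np

def knn_forecast(series, state_len, k, t_now, period, window):
    """One-step KNN forecast using only candidates inside a time window."""
    state = series[-state_len:]
    cands = []
    for i in range(state_len, len(series) - 1):
        # Keep only candidates whose position in the daily cycle is close
        # to the current one (the time constraint window).
        if abs((i % period) - (t_now % period)) > window:
            continue
        hist = series[i - state_len:i]
        cands.append((float(np.linalg.norm(hist - state)), series[i]))
    cands.sort(key=lambda c: c[0])
    return float(np.mean([v for _, v in cands[:k]]))

# Synthetic "traffic volume": a daily sinusoid sampled every 5 minutes.
period = 288                       # samples per day
t = np.arange(period * 10)         # ten days of history
series = 100 + 50 * np.sin(2 * np.pi * t / period)
pred = knn_forecast(series, state_len=6, k=5,
                    t_now=len(series), period=period, window=12)
truth = 100 + 50 * np.sin(2 * np.pi * len(series) / period)
```

Restricting candidates to the window both speeds up the search and excludes states that match in shape but belong to a different traffic regime.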
Abstract:
MapReduce is a computation model for processing large data sets in parallel on large clusters of machines, in a reliable, fault-tolerant manner. A MapReduce computation is broken down into a number of map tasks and reduce tasks, which are performed by so-called mappers and reducers, respectively. The placement of the mappers and reducers on the machines directly affects the performance and cost of a MapReduce computation in cloud computing. From the computational point of view, the mappers/reducers placement problem is a generalization of the classical bin packing problem, which is NP-complete. Thus, in this paper we propose a new heuristic algorithm for the mappers/reducers placement problem in cloud computing and evaluate it against several other heuristics on solution quality and computation time by solving a set of test problems with various characteristics. The computational results show that our heuristic algorithm is much more efficient than the other heuristics and can obtain a better solution in a reasonable time. Furthermore, we verify the effectiveness of our heuristic algorithm by comparing the mapper/reducer placement it generates for a benchmark problem with a conventional placement that puts a fixed number of mappers/reducers on each machine. The comparison results show that the computation using our mapper/reducer placement is much cheaper than the computation using the conventional placement while still satisfying the computation deadline.
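Since the placement problem generalizes bin packing, a classical heuristic such as first-fit decreasing (FFD) makes a natural baseline for comparison. This is an illustrative baseline, not the heuristic proposed in the paper; the task sizes and machine capacity are assumed.

```python
def first_fit_decreasing(task_sizes, machine_capacity):
    """Pack tasks onto machines greedily, largest first.

    Returns a list of machines, each a list of the task sizes placed on it.
    """
    machines = []  # each entry: [remaining_capacity, [task sizes...]]
    for size in sorted(task_sizes, reverse=True):
        for m in machines:
            if m[0] >= size:          # first machine the task fits on
                m[0] -= size
                m[1].append(size)
                break
        else:                         # no machine fits: open a new one
            machines.append([machine_capacity - size, [size]])
    return [m[1] for m in machines]

# Hypothetical mapper/reducer resource demands (slots) on 10-slot machines.
placement = first_fit_decreasing([5, 7, 5, 2, 4, 2, 5], 10)
```

FFD is guaranteed to use at most roughly 11/9 of the optimal number of bins, which is why it is a common yardstick for placement heuristics of this kind.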
Abstract:
Server consolidation using virtualization has become an important approach to improving the energy efficiency of data centers, and virtual machine placement is the key to server consolidation. In the past few years, many approaches to virtual machine placement have been proposed. However, existing approaches consider only the energy consumed by the physical machines in a data center and ignore the energy consumed by its communication network. This network energy consumption is not trivial, and should therefore be considered in virtual machine placement. In our preliminary research, we proposed a genetic algorithm for a new virtual machine placement problem that considers the energy consumption in both the physical machines and the communication network of a data center. Aiming at improving the performance and efficiency of that genetic algorithm, this paper presents a hybrid genetic algorithm for the energy-efficient virtual machine placement problem. Experimental results show that the hybrid genetic algorithm significantly outperforms the original genetic algorithm, and that it is scalable.
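A plain genetic algorithm for this kind of placement problem can be sketched as follows (a toy illustration, not the paper's hybrid algorithm; the cost model, traffic matrix and capacities are all assumed): a chromosome assigns each VM to a machine, and the cost charges energy per active machine, penalises overloaded machines, and adds a network cost for communicating VM pairs split across machines.

```python
import random

random.seed(5)

VMS, MACHINES, CAP = 8, 4, 2                             # assumed sizes
TRAFFIC = {(0, 1): 5, (2, 3): 4, (4, 5): 3, (6, 7): 2}   # assumed VM traffic

def cost(assign):
    """Energy of active machines + overload penalty + network cost."""
    loads = [assign.count(mc) for mc in range(MACHINES)]
    energy = 10 * sum(1 for l in loads if l > 0)
    overload = 100 * sum(max(0, l - CAP) for l in loads)
    net = sum(t for (a, b), t in TRAFFIC.items() if assign[a] != assign[b])
    return energy + overload + net

def evolve(pop_size=40, gens=80, mut=0.2):
    pop = [[random.randrange(MACHINES) for _ in range(VMS)]
           for _ in range(pop_size)]
    for _ in range(gens):
        pop.sort(key=cost)
        parents = pop[:pop_size // 2]          # elitist selection
        children = []
        while len(children) < pop_size - len(parents):
            p1, p2 = random.sample(parents, 2)
            cut = random.randrange(1, VMS)
            child = p1[:cut] + p2[cut:]        # one-point crossover
            if random.random() < mut:          # point mutation
                child[random.randrange(VMS)] = random.randrange(MACHINES)
            children.append(child)
        pop = parents + children
    return min(pop, key=cost)

best = evolve()
```

The hybrid algorithm in the paper augments this basic loop (for instance with local improvement of offspring), which is what drives the reported performance gain.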
Abstract:
Rigid lenses, which were originally made from glass (between 1888 and 1940) and later from polymethyl methacrylate or silicone acrylate materials, are uncomfortable to wear and are now seldom fitted to new patients. Contact lenses became a popular mode of ophthalmic refractive error correction following the discovery of the first hydrogel material – hydroxyethyl methacrylate – by Czech chemist Otto Wichterle in 1960. To satisfy the requirements for ocular biocompatibility, contact lenses must be transparent and optically stable (for clear vision), have a low elastic modulus (for good comfort), have a hydrophilic surface (for good wettability), and be permeable to certain metabolites, especially oxygen, to allow for normal corneal metabolism and respiration during lens wear. A major breakthrough in respect of the last of these requirements was the development of silicone hydrogel soft lenses in 1999 and techniques for making the surface hydrophilic. The vast majority of contact lenses distributed worldwide are mass-produced using cast molding, although spin casting is also used. These advanced mass-production techniques have facilitated the frequent disposal of contact lenses, leading to improvements in ocular health and fewer complications. More than one-third of all soft contact lenses sold today are designed to be discarded daily (i.e., ‘daily disposable’ lenses).