300 results for Machine of 360°


Relevance: 30.00%

Abstract:

The relationship between coronal knee laxity and the restraining properties of the collateral ligaments remains unknown. This study investigated correlations between the structural properties of the collateral ligaments and stress angles used in computer-assisted total knee arthroplasty (TKA), measured with an optically based navigation system. Ten fresh-frozen cadaveric knees (mean age: 81 ± 11 years) were dissected to leave the menisci, cruciate ligaments, posterior joint capsule and collateral ligaments. The resected femur and tibia were rigidly secured within a test system which permitted kinematic registration of the knee using a commercially available image-free navigation system. Frontal plane knee alignment and varus-valgus stress angles were acquired. The force applied during varus-valgus testing was quantified. Medial and lateral bone-collateral ligament-bone specimens were then prepared, mounted within a uni-axial materials testing machine, and extended to failure. Force and displacement data were used to calculate the principal structural properties of the ligaments. The mean varus laxity was 4 ± 1° and the mean valgus laxity was 4 ± 2°. The corresponding mean manual force applied was 10 ± 3 N and 11 ± 4 N, respectively. While measures of knee laxity were independent of the ultimate tensile strength and stiffness of the collateral ligaments, there was a significant correlation between the force applied during stress testing and the instantaneous stiffness of the medial (r = 0.91, p = 0.001) and lateral (r = 0.68, p = 0.04) collateral ligaments. These findings suggest that clinicians may perceive a rate of change of ligament stiffness as the end-point during assessment of collateral knee laxity.
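As a minimal illustration of the correlation analysis described above, the sketch below computes a Pearson correlation between an applied stress-test force and an instantaneous ligament stiffness. It assumes numpy and scipy are available; the paired values are invented placeholders, not the study data.

    import numpy as np
    from scipy.stats import pearsonr

    # Hypothetical paired observations: manually applied varus-valgus force and
    # the instantaneous stiffness of the medial collateral ligament.
    applied_force_N = np.array([7.2, 8.5, 9.1, 10.0, 10.8, 11.5, 12.3, 13.0, 13.8, 14.5])
    stiffness_N_per_mm = np.array([28.0, 31.0, 33.0, 36.0, 38.0, 41.0, 44.0, 46.0, 49.0, 52.0])

    r, p = pearsonr(applied_force_N, stiffness_N_per_mm)
    print(f"r = {r:.2f}, p = {p:.3f}")  # the study reports r = 0.91, p = 0.001 medially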

Relevance: 30.00%

Abstract:

Spatial organisation of proteins according to their function plays an important role in the specificity of their molecular interactions. Emerging proteomics methods seek to assign proteins to sub-cellular locations by partial separation of organelles and computational analysis of protein abundance distributions among partially separated fractions. Such methods permit simultaneous analysis of unpurified organelles and promise proteome-wide localisation in scenarios wherein perturbation may prompt dynamic re-distribution. A possible shortcoming is the difficulty of resolving organelles that display similar behaviour during a protocol designed to provide only partial enrichment. We employ the Localisation of Organelle Proteins by Isotope Tagging (LOPIT) organelle proteomics platform to demonstrate that combining information from distinct separations of the same material can improve organelle resolution and the assignment of proteins to sub-cellular locations. Two previously published experiments, whose distinct gradients are alone unable to fully resolve six known protein-organelle groupings, are subjected to a rigorous analysis to assess protein-organelle association via a contemporary pattern recognition algorithm. Upon straightforward combination of single-gradient data, we observe significant improvement in protein-organelle association via both a non-linear support vector machine algorithm and partial least-squares discriminant analysis. The outcome yields suggestions for further improvements to present organelle proteomics platforms, and a robust analytical methodology via which to associate proteins with sub-cellular organelles.
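A minimal sketch of the gradient-combination idea, assuming synthetic abundance profiles and a scikit-learn support vector machine in place of the published LOPIT data and analysis pipeline; the class structure, fraction counts and noise model are invented for illustration.

    import numpy as np
    from sklearn.svm import SVC
    from sklearn.model_selection import cross_val_score

    rng = np.random.default_rng(0)
    n_proteins, n_fractions, n_classes = 300, 8, 6
    labels = rng.integers(0, n_classes, n_proteins)      # six known organelle classes

    def gradient_profiles(peaks):
        """Synthetic abundance distributions: each protein peaks in the given
        fraction, with gamma noise, normalised so every profile sums to one."""
        profiles = rng.gamma(1.0, size=(peaks.size, n_fractions))
        profiles[np.arange(peaks.size), peaks] += 4.0
        return profiles / profiles.sum(axis=1, keepdims=True)

    # In gradient 1 the class pairs {0,1}, {2,3}, {4,5} co-migrate; in gradient 2
    # the groups {0,2,4} and {1,3,5} co-migrate, so neither resolves all six alone.
    grad1 = gradient_profiles(labels // 2)
    grad2 = gradient_profiles(labels % 2 + 4)

    clf = SVC(kernel="rbf", gamma="scale")
    for name, X in [("gradient 1 only", grad1),
                    ("gradient 2 only", grad2),
                    ("combined", np.hstack([grad1, grad2]))]:
        accuracy = cross_val_score(clf, X, labels, cv=5).mean()
        print(f"{name}: mean cross-validated accuracy = {accuracy:.2f}")

Concatenating the two profiles gives the classifier access to both separations at once, which is the essence of the combination step described above.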

Relevance: 30.00%

Abstract:

This paper makes a case for thinking about the primary school as a logic machine (apparatus) as a way of understanding processes of in-school stratification. Firstly, we discuss related literature on in-school stratification in primary schools, particularly as it relates to literacy learning. Secondly, we explain how school reform can be thought about in terms of the idea of the machine or apparatus, in which case the processes of in-school stratification can be mapped as more than simply concerns about school organisation (such as student grouping); they also involve a politics of truth, played out in each school, that constitutes school culture and what counts as 'good' pedagogy. Thirdly, the paper focuses specifically on research conducted in primary schools in the northern suburbs of Adelaide, one of the most educationally disadvantaged regions in Australia, as a case study of the relationship between in-school stratification and the reproduction of inequality. We draw on more than 20 years of ethnographic work in primary schools in the northern suburbs of Adelaide and provide a snapshot of a recent attempt to improve literacy achievement in a few northern suburbs public primary schools (the SILA project). The SILA project, through diagnostic reviews, has provided a significant analysis of the challenges facing policy and practice in such challenging school contexts, one that also maps onto existing (inter)national research. These diagnostic reviews said 'hard things' that required attention by SILA schools, including:

· an over-reliance on whole-class, low-level, routine tasks and hence a lack of challenge and rigour in the learning tasks offered to students;
· a focus on the 'code breaking' function of language at the expense of richer conceptualisations of literacy that might guide teachers' understanding of challenging pedagogies;
· the need for substantial shifts in the culture of schools, especially unsettling deficit views of students and their communities;
· a need to provide a more 'consistent' approach to teaching literacy across the school;
· a need to focus School Improvement Plans on implementing a clear focus on literacy learning; and
· a need to sustain professional learning to produce new knowledge and practice.

The paper concludes with suggestions for further research and possible reform projects into the primary school as a logic machine.

Relevance: 30.00%

Abstract:

Cyclostationary models for the diagnostic signals measured on faulty rotating machinery have proved successful in many laboratory tests and industrial applications. The squared envelope spectrum has been identified as the most efficient indicator for the assessment of second-order cyclostationary symptoms of damage, which are typical, for instance, of rolling element bearing faults. In an attempt to foster the spread of rotating machinery diagnostics, the current trend in the field is to reach higher levels of automation of condition monitoring systems. For this purpose, statistical tests for the presence of cyclostationarity have been proposed in recent years. The statistical thresholds proposed in the past for the identification of cyclostationary components were obtained under the hypothesis that the signal is white noise when the component is healthy. This assumption, combined with the non-white nature of real signals, makes it necessary to pre-whiten the signal or to filter it in optimal narrow bands, increasing the complexity of the algorithm and the risk of losing diagnostic information or introducing biases in the result. In this paper, the authors introduce an original analytical derivation of the statistical tests for cyclostationarity in the squared envelope spectrum, dropping the hypothesis of white noise from the outset. The effect of first-order and second-order cyclostationary components on the distribution of the squared envelope spectrum will be quantified and the effectiveness of the newly proposed threshold verified, providing a sound theoretical basis and a practical starting point for efficient automated diagnostics of machine components such as rolling element bearings. The analytical results will be verified by means of numerical simulations and by using experimental vibration data from rolling element bearings.
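The squared envelope spectrum itself is straightforward to compute; the sketch below does so for a simulated second-order cyclostationary signal (an amplitude-modulated resonance plus background noise), assuming numpy and scipy. The sampling rate, fault frequency and noise levels are arbitrary illustrative choices, and the statistical thresholds derived in the paper are not reproduced here.

    import numpy as np
    from scipy.signal import hilbert

    fs, T = 20000, 2.0                       # sampling rate [Hz] and duration [s]
    t = np.arange(0, T, 1 / fs)
    f_fault, f_res = 87.0, 3000.0            # cyclic (fault) and carrier frequencies

    # Second-order cyclostationary component: broadband noise amplitude-modulated
    # at the fault frequency and centred on a resonance, plus background noise.
    rng = np.random.default_rng(0)
    modulation = 1 + 0.8 * np.cos(2 * np.pi * f_fault * t)
    carrier = rng.standard_normal(t.size) * np.cos(2 * np.pi * f_res * t)
    x = modulation * carrier + 0.5 * rng.standard_normal(t.size)

    envelope_sq = np.abs(hilbert(x)) ** 2    # squared envelope via the analytic signal
    ses = np.abs(np.fft.rfft(envelope_sq - envelope_sq.mean())) / t.size
    freqs = np.fft.rfftfreq(t.size, 1 / fs)

    band = (freqs > 10) & (freqs < 500)      # inspect the low cyclic-frequency band
    peak = freqs[band][np.argmax(ses[band])]
    print(f"dominant cyclic frequency: {peak:.1f} Hz (fault frequency: {f_fault} Hz)")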

Relevance: 30.00%

Abstract:

A new approach is proposed for obtaining a non-linear area-based equivalent model of power systems to express the inter-area oscillations using synchronised phasor measurements. The generators that remain coherent for inter-area disturbances over a wide range of operating conditions define the areas, and the reduced model is obtained by representing each area by an equivalent machine. The parameters of the reduced system are identified by processing the obtained measurements, and a non-linear Kalman estimator is then designed for the estimation of equivalent area angles and frequencies. The simulation of the approach on a two-area system shows substantial reduction of non-inter-area modes in the estimated angles. The proposed methods are also applied to a ten-machine system to illustrate the feasibility of the approach on larger and meshed networks.
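As an illustration of the kind of non-linear Kalman estimator mentioned above, the sketch below runs an extended Kalman filter on a single equivalent machine governed by the swing equation, estimating the area angle and frequency from noisy angle measurements. All parameters, noise levels and measurements are invented, numpy is assumed, and this is not the estimator designed in the paper.

    import numpy as np

    dt, M, D, Pm, Pmax = 0.01, 6.0, 1.0, 0.9, 1.6     # per-unit equivalent-area data

    def f(x):
        """One Euler step of the equivalent-machine swing equation."""
        delta, omega = x
        return np.array([delta + dt * omega,
                         omega + dt * (Pm - Pmax * np.sin(delta) - D * omega) / M])

    def F(x):
        """Jacobian of f, used to propagate the error covariance."""
        delta, _ = x
        return np.array([[1.0, dt],
                         [-dt * Pmax * np.cos(delta) / M, 1.0 - dt * D / M]])

    H = np.array([[1.0, 0.0]])                        # the PMU provides the angle only
    Q, R = np.diag([1e-6, 1e-4]), np.array([[1e-3]])

    # Simulate the "true" area and noisy angle measurements.
    rng = np.random.default_rng(1)
    x_true = np.array([0.2, 0.0])
    truth, meas = [], []
    for _ in range(500):
        x_true = f(x_true)
        truth.append(x_true)
        meas.append(x_true[0] + rng.normal(scale=np.sqrt(R[0, 0])))

    # Extended Kalman filter: predict with the swing model, correct with the angle.
    x_hat, P = np.array([0.0, 0.0]), np.eye(2)
    for z in meas:
        x_hat, P = f(x_hat), F(x_hat) @ P @ F(x_hat).T + Q
        K = P @ H.T @ np.linalg.inv(H @ P @ H.T + R)
        x_hat = x_hat + K @ (np.array([z]) - H @ x_hat)
        P = (np.eye(2) - K @ H) @ P

    print("true [delta, omega]:", np.round(truth[-1], 3), " estimate:", np.round(x_hat, 3))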

Relevance: 30.00%

Abstract:

With the ever-increasing penetration of wind power, its impacts on the power system are becoming more and more significant. Hence, it is necessary to systematically examine its impacts on small signal stability and transient stability in order to identify countermeasures. To this end, a comprehensive study is carried out to compare the dynamic performance of a power system with each of three widely used wind power generators. First, the dynamic models are described for three types of wind power generators, i.e. the squirrel cage induction generator (SCIG), the doubly fed induction generator (DFIG) and the permanent magnet generator (PMG). Then, the impacts of these wind power generators on small signal stability and transient stability are compared with those of a substituted synchronous generator (SG) in the WSCC three-machine nine-bus system by eigenvalue analysis and dynamic time-domain simulations. Simulation results show that the impacts of the different wind power generators differ under small and large disturbances.
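A minimal sketch of the eigenvalue (small-signal) analysis step, applied here to a classical single-machine-infinite-bus model rather than the WSCC three-machine nine-bus system used in the study; the per-unit constants are arbitrary illustrative values and numpy is assumed.

    import numpy as np

    H, D = 3.5, 2.0                               # inertia constant [s], damping [pu]
    omega_s = 2 * np.pi * 50                      # synchronous speed [rad/s]
    Pmax, delta0 = 1.8, np.deg2rad(35)            # transfer capability, operating angle
    Ks = Pmax * np.cos(delta0)                    # synchronising power coefficient

    # Linearised swing equation with state [d_delta, d_omega]:
    #   d(d_delta)/dt = d_omega
    #   d(d_omega)/dt = -(omega_s*Ks/(2H)) * d_delta - (D/(2H)) * d_omega
    A = np.array([[0.0, 1.0],
                  [-omega_s * Ks / (2 * H), -D / (2 * H)]])

    eigvals = np.linalg.eigvals(A)
    for lam in eigvals:
        zeta = -lam.real / abs(lam)               # damping ratio of the mode
        print(f"eigenvalue {lam:.3f}, damping ratio {zeta:.3f}")
    print("small-signal stable:", bool(np.all(eigvals.real < 0)))

Modes with negative real parts (and adequate damping ratios) indicate small-signal stability; the same check is applied to the full linearised system matrix in studies like the one described above.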

Relevance: 30.00%

Abstract:

MapReduce is a computation model for processing large data sets in parallel on large clusters of machines in a reliable, fault-tolerant manner. A MapReduce computation is broken down into a number of map tasks and reduce tasks, which are performed by so-called mappers and reducers, respectively. The placement of the mappers and reducers on the machines directly affects the performance and cost of the MapReduce computation in cloud computing. From a computational point of view, the mapper/reducer placement problem is a generalisation of the classical bin packing problem, which is NP-complete. Thus, in this paper we propose a new heuristic algorithm for the mapper/reducer placement problem in cloud computing and evaluate it by comparing it with several other heuristics on solution quality and computation time, solving a set of test problems with various characteristics. The computational results show that our heuristic algorithm is much more efficient than the other heuristics and can obtain a better solution in a reasonable time. Furthermore, we verify the effectiveness of our heuristic algorithm by comparing the mapper/reducer placement it generates for a benchmark problem with a conventional mapper/reducer placement that places a fixed number of mappers/reducers on each machine. The comparison shows that the computation using our mapper/reducer placement is much cheaper than the computation using the conventional placement while still satisfying the computation deadline.
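To make the bin-packing view concrete, the sketch below places tasks on machines with first-fit decreasing, a standard bin-packing heuristic used here only as a stand-in; the paper's own heuristic is not reproduced, and the task demands and machine capacity are invented.

    def first_fit_decreasing(task_demands, machine_capacity):
        """Greedily place tasks (resource demands) onto machines so that no
        machine's total demand exceeds its capacity, opening a new machine
        only when an existing one cannot accommodate the task."""
        machines = []                                   # each entry: list of demands
        for demand in sorted(task_demands, reverse=True):
            for m in machines:
                if sum(m) + demand <= machine_capacity:
                    m.append(demand)
                    break
            else:
                machines.append([demand])               # open a new machine
        return machines

    # Example: CPU shares needed by map/reduce tasks, machines of capacity 1.0.
    tasks = [0.6, 0.5, 0.5, 0.4, 0.3, 0.3, 0.2, 0.2]
    placement = first_fit_decreasing(tasks, 1.0)
    print(len(placement), "machines used:", placement)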

Relevance: 30.00%

Abstract:

Server consolidation using virtualization technology has become an important technique for improving the energy efficiency of data centers, and virtual machine placement is the key problem in server consolidation. In the past few years, many approaches to virtual machine placement have been proposed. However, existing approaches consider only the energy consumed by the physical machines in a data center and ignore the energy consumed by the communication network. The energy consumption of the communication network in a data center is not trivial and should therefore also be considered in virtual machine placement. In our preliminary research, we proposed a genetic algorithm for a new virtual machine placement problem that considers the energy consumption of both the physical machines and the communication network in a data center. Aiming to improve the performance and efficiency of that genetic algorithm, this paper presents a hybrid genetic algorithm for the energy-efficient virtual machine placement problem. Experimental results show that the hybrid genetic algorithm significantly outperforms the original genetic algorithm and that it is scalable.
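The sketch below is a toy hybrid genetic algorithm, not the authors' algorithm, for a placement cost that combines physical-machine energy with a proxy for communication-network energy. The VM counts, capacities, traffic matrix and energy model are all invented, and the "hybrid" element is a simple local search applied to offspring.

    import random

    random.seed(0)
    n_vms, n_pms, pm_capacity = 12, 6, 4
    traffic = [[random.random() if i != j else 0 for j in range(n_vms)] for i in range(n_vms)]

    def cost(placement):
        load = [0] * n_pms
        for pm in placement:
            load[pm] += 1
        if max(load) > pm_capacity:
            return float("inf")                               # infeasible placement
        host_energy = sum(100 + 20 * l for l in load if l)    # idle + per-VM power
        net_energy = sum(traffic[i][j]
                         for i in range(n_vms) for j in range(n_vms)
                         if placement[i] != placement[j])     # inter-PM traffic proxy
        return host_energy + 50 * net_energy

    def local_search(p):
        """Hybrid step: greedily move single VMs while the cost improves."""
        best, improved = list(p), True
        while improved:
            improved = False
            for i in range(n_vms):
                for pm in range(n_pms):
                    cand = best[:i] + [pm] + best[i + 1:]
                    if cost(cand) < cost(best):
                        best, improved = cand, True
        return best

    def evolve(pop_size=30, generations=40):
        pop = [[random.randrange(n_pms) for _ in range(n_vms)] for _ in range(pop_size)]
        for _ in range(generations):
            pop.sort(key=cost)
            parents = pop[:pop_size // 2]
            children = []
            while len(children) < pop_size - len(parents):
                a, b = random.sample(parents, 2)
                child = [random.choice(pair) for pair in zip(a, b)]       # uniform crossover
                if random.random() < 0.3:
                    child[random.randrange(n_vms)] = random.randrange(n_pms)  # mutation
                children.append(local_search(child))
            pop = parents + children
        return min(pop, key=cost)

    best = evolve()
    print("best placement:", best, "cost:", round(cost(best), 1))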

Relevance: 30.00%

Abstract:

In this paper an approach is presented for identifying a reduced model of the coherent areas in a power system from phasor measurement units, in order to represent the inter-area oscillations of the system. The generators that remain coherent over a wide range of operating conditions form the areas, and the reduced model is obtained by representing each area by an equivalent machine. The reduced non-linear model is then identified from the data obtained from the measurement units. The approach is simulated on three test systems, and the results show that the identification process is highly accurate.
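As a sketch of identifying a reduced model from synchronised measurements, the example below simulates one equivalent area with known swing-equation parameters and re-identifies the reduced model d(omega)/dt = a + b*sin(delta) + c*omega by least squares from noisy samples. All values are invented, numpy is assumed, and this is not the identification method of the paper.

    import numpy as np

    rng = np.random.default_rng(0)
    dt, steps = 0.01, 2000
    M, D, Pm, Pmax = 5.0, 1.2, 0.8, 1.5          # "true" per-unit parameters

    # Simulate the equivalent area with Euler integration of the swing equation.
    delta, omega = 0.1, 0.0
    deltas, omegas = [], []
    for _ in range(steps):
        domega = (Pm - Pmax * np.sin(delta) - D * omega) / M
        delta, omega = delta + dt * omega, omega + dt * domega
        deltas.append(delta)
        omegas.append(omega)

    # Add measurement noise to mimic PMU data.
    deltas = np.array(deltas) + rng.normal(scale=5e-4, size=steps)
    omegas = np.array(omegas) + rng.normal(scale=5e-4, size=steps)

    # Regress the measured acceleration on [1, sin(delta), omega].
    accel = np.gradient(omegas, dt)
    X = np.column_stack([np.ones(steps), np.sin(deltas), omegas])
    (a, b, c), *_ = np.linalg.lstsq(X, accel, rcond=None)

    print(f"Pm/M   identified {a:8.3f}  true {Pm / M:8.3f}")
    print(f"Pmax/M identified {-b:8.3f}  true {Pmax / M:8.3f}")
    print(f"D/M    identified {-c:8.3f}  true {D / M:8.3f}")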

Relevance: 30.00%

Abstract:

This work is motivated by the need to efficiently machine the edges of ophthalmic polymer lenses for mounting in spectacle or instrument frames. The polymer materials used are required to have suitable optical characteristics, such as a high refractive index and Abbe number, combined with low density and high scratch and impact resistance. Edge surface finish is an important aesthetic consideration; its quality is governed by the material removal operation and the physical properties of the material being processed. The wear behaviour of polymer materials is not as straightforward as that of other materials because of their molecular and structural complexity and their time-dependent properties. Four commercial ophthalmic polymers have been studied in this work using nanoindentation techniques, which are evaluated as tools for probing surface mechanical properties in order to better understand the grinding response of polymer materials.

Relevance: 30.00%

Abstract:

To obtain accurate Monte Carlo simulations of small radiation fields, it is important to model the initial source parameters (electron energy and spot size) accurately. However, recent studies have shown that small field dosimetry correction factors are insensitive to these parameters. The aim of this work is to extend this concept and test whether these parameters affect dose perturbations in general, which is important for detector design and for calculating perturbation correction factors. The EGSnrc C++ user code cavity was used for all simulations. Varying amounts of air between 0 and 2 mm were deliberately introduced upstream of a diode and the dose perturbation caused by the air was quantified. These simulations were then repeated using a range of initial electron energies (5.5 to 7.0 MeV) and electron spot sizes (0.7 to 2.2 FWHM). The resultant dose perturbations were large; for example, 2 mm of air caused a dose reduction of up to 31% when simulated with a 6 mm field size. However, these values did not vary by more than 2% when simulated across the full range of source parameters tested. If a detector is modified by the introduction of air, one can be confident that the response of the detector will be the same across all similar linear accelerators, and Monte Carlo modelling of each individual machine is not required.

Relevance: 30.00%

Abstract:

Insulated rail joints (IRJs) are a primary component of the rail track safety and signalling systems. The rails are supported by two fishplates which are fastened by bolts and nuts and, with the support of sleepers and track ballast, form an integrated assembly. IRJ failure can result from progressive defects, the propagation of which is influenced by residual stresses in the rail. Residual stresses change significantly during service due to the complex deformation and damage effects associated with wheel rolling, sliding and impact. IRJ failures can occur when metal flows over the insulated rail gap (typically 6-8 mm wide), bridges the electrically isolated sections of track and results in malfunction of the track signalling system. In this investigation, residual stress measurements were obtained from rail ends which had undergone controlled amounts of surface plastic deformation using a full-scale wheel-on-track simulation test rig. Results were compared with those obtained from similar investigations performed on rail ends from ex-service IRJs. Residual stresses were measured by neutron diffraction at the Australian Nuclear Science and Technology Organisation (ANSTO). Measurements with a constant gauge volume of 3 × 3 × 3 mm³ were carried out in the central vertical plane on 5 mm thick sliced rail samples cut by an electric discharge machine (EDM). Stress evolution at the rail ends was found to exhibit characteristics similar to those of the ex-service rails, with a compressive zone about 5 mm deep counterbalanced by a tension zone beneath it, extending to a depth of around 15 mm. However, in contrast to the ex-service rails, the type of stress distribution in the test-rig-deformed samples was noticeably different, owing to the localisation of load under the particular test conditions. In the latter, in contrast with the clear stress evolution, there was no obvious evolution of the stress-free lattice spacing d0. Since d0 reflects the long-term accumulation of crystal lattice damage and microstructural change under service load, the loading history of the test-rig samples has not reached the same level as that of the ex-service rails. It is concluded that the wheel-on-rail simulation rig provides the capability for testing wheel-rail rolling contact conditions in rails, rail ends and insulated rail joints.

Relevance: 30.00%

Abstract:

This paper describes an interactive installation work set in a large dome space. The installation is an audio and physical re-rendition of an interactive writing work. In the original work, the user interacted via keyboard and screen while online. This rendition of the work retains the online interaction, but also places the interaction within a physical space, where the main 'conversation' takes place by the participant-audience speaking through microphones and listening through headphones. The work now also includes voice and SMS input, using speech-to-text and text-to-speech conversion technologies, and audio and displayed text for output. These additions allow the participant-audience to co-author the work while they participate in audible conversation with keyword-triggering characters (bots). Communication in the space can be person-to-computer via microphone, keyboard, and phone; person-to-person via machine and within the physical space; computer-to-computer; and computer-to-person via audio and projected text.

Relevance: 30.00%

Abstract:

We present an approach to automatically de-identify health records. In our approach, personal health information is identified using a Conditional Random Fields machine learning classifier, a large set of linguistic and lexical features, and pattern matching techniques. The identified personal information is then removed from the reports. The de-identification of personal health information is fundamental for the sharing and secondary use of electronic health records, for example for data mining and disease monitoring. The effectiveness of our approach is first evaluated on the 2007 i2b2 Shared Task dataset, a widely adopted dataset for evaluating de-identification techniques. Subsequently, we investigate the robustness of the approach to limited training data, and we study its effectiveness on data of different types and quality by evaluating the approach on scanned pathology reports from an Australian institution. This data contains optical character recognition errors, as well as linguistic conventions that differ from those in the i2b2 dataset, for example different date formats. The findings suggest that our approach is comparable to the best approaches from the 2007 i2b2 Shared Task; in addition, the approach is found to be robust to variations in training size, data type and quality in the presence of sufficient training data.
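The sketch below illustrates only the pattern-matching component of such a pipeline, redacting a few personal-information types with regular expressions; the Conditional Random Fields classifier and feature set are not reproduced, and the patterns and sample report text are invented and far from exhaustive.

    import re

    PATTERNS = {
        "ID": re.compile(r"\bMRN[:\s]*\d{6,10}\b", re.IGNORECASE),
        "DATE": re.compile(r"\b(?:\d{1,2}[/-]\d{1,2}[/-]\d{2,4}|\d{4}-\d{2}-\d{2})\b"),
        "PHONE": re.compile(r"\b(?:\+?61|0)[\d\s-]{7,11}\d\b"),
    }

    def deidentify(text):
        """Replace every pattern match with its category placeholder."""
        for label, pattern in PATTERNS.items():
            text = pattern.sub(f"[{label}]", text)
        return text

    report = "Seen on 03/07/2014, MRN: 00123456, contact 0412 345 678 for results."
    print(deidentify(report))
    # -> Seen on [DATE], [ID], contact [PHONE] for results.

In a full system such patterns would supplement, not replace, the statistical classifier, which is what handles names and other context-dependent identifiers.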