935 results for Threshold mold


Relevance:

10.00%

Publisher:

Abstract:

3D models of long bones are used in a number of fields, including orthopaedic implant design. Accurate reconstruction of the 3D models is of utmost importance for designing implants that achieve good alignment between two bone fragments. For this purpose, CT scanners are employed to acquire accurate bone data, exposing the individual to a high dose of ionising radiation. Magnetic resonance imaging (MRI) has been shown to be a potential alternative to computed tomography (CT) for scanning volunteers for 3D reconstruction of long bones, essentially avoiding the high radiation dose of CT. In MR imaging of long bones, artefacts due to random movements of the skeletal system create challenges, as they introduce inaccuracies into 3D models reconstructed from data sets containing such artefacts. One of the defects observed during an initial study is a lateral shift artefact in the reconstructed 3D models. This artefact is believed to result from volunteers moving the leg between two successive scanning stages (the lower limb has to be scanned in at least five stages because of the limited scanning length of the scanner). As this artefact creates inaccuracies in implants designed using these models, it needs to be corrected before the 3D models are applied to implant design. Therefore, this study aimed to correct the lateral shift artefact using 3D modelling techniques. The femora of five ovine hind limbs were scanned with a 3T MRI scanner using a 3D VIBE-based protocol. The scanning was conducted in two halves while maintaining a good overlap between them, and a lateral shift was generated by moving the limb several millimetres between the two scanning stages. The 3D models were reconstructed using a multi-threshold segmentation method. The artefact was corrected by aligning the two halves with a robust iterative closest point (ICP) algorithm, using the overlapping region between them. The corrected models were compared with a reference model generated by CT scanning of the same sample. The results indicate that the artefact was corrected with an average deviation of 0.32 ± 0.02 mm between the corrected model and the reference model. In comparison, the model obtained from a single MRI scan showed an average error of 0.25 ± 0.02 mm against the reference model, and an average deviation of 0.34 ± 0.04 mm was seen when models generated after the table was moved were compared with the reference models; thus, movement of the table is also a contributing factor to the motion artefacts.
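
For readers unfamiliar with the alignment step described above, the following is a minimal sketch of point-to-point ICP alignment of two overlapping point-cloud halves; the study uses a robust ICP variant on real MRI-derived surface models, so this plain nearest-neighbour version is an illustrative assumption, not the authors' implementation.

```python
# Minimal sketch: align a shifted half (moving) onto the overlap of the other half
# (fixed) with basic point-to-point ICP (nearest neighbours + SVD rigid fit).
import numpy as np
from scipy.spatial import cKDTree

def best_fit_transform(src, dst):
    """Least-squares rigid transform (R, t) mapping src onto dst (Kabsch/SVD)."""
    c_src, c_dst = src.mean(axis=0), dst.mean(axis=0)
    H = (src - c_src).T @ (dst - c_dst)
    U, _, Vt = np.linalg.svd(H)
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:          # guard against reflections
        Vt[-1, :] *= -1
        R = Vt.T @ U.T
    t = c_dst - R @ c_src
    return R, t

def icp(moving, fixed, iters=50, tol=1e-6):
    """Iteratively refine the rigid transform aligning `moving` to `fixed`."""
    tree = cKDTree(fixed)
    cur = moving.copy()
    prev_err = np.inf
    R_total, t_total = np.eye(3), np.zeros(3)
    for _ in range(iters):
        dist, idx = tree.query(cur)               # nearest-neighbour correspondences
        R, t = best_fit_transform(cur, fixed[idx])
        cur = cur @ R.T + t
        R_total, t_total = R @ R_total, R @ t_total + t
        err = dist.mean()
        if abs(prev_err - err) < tol:
            break
        prev_err = err
    return R_total, t_total, err
```

Applying the returned rigid transform to the shifted half brings it into the frame of the other half, after which the corrected model can be compared against the CT reference.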

Relevance:

10.00%

Publisher:

Abstract:

Traffic safety studies demand more than current micro-simulation models can provide, as these models presume that all drivers of motor vehicles exhibit safe behaviour. Several car-following models are used within the various micro-simulation packages. This research compares the ability of mainstream car-following models to emulate driver behaviour parameters such as headway and Time to Collision. The comparison first illustrates which model is more robust in reproducing these metrics. Second, a series of sensitivity tests further explores the behaviour of each model. Based on the outcomes of this two-step exploration, a modified structure and parameter adjustment is proposed for each car-following model to simulate more realistic vehicle movements, particularly headways and Time to Collision below a certain critical threshold. NGSIM vehicle trajectory data are used to evaluate the modified models' performance in assessing critical safety events within traffic flow. The simulation results indicate that the proposed modified models reproduce the frequency of critical Time to Collision events better than the generic models, while the improvement in headway is not significant. The outcome of this paper facilitates traffic safety assessment using microscopic simulation.
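
As an illustration of the safety metrics compared above, here is a hedged sketch of computing time headway and Time to Collision (TTC) for a leader-follower pair from trajectory samples such as NGSIM data, then counting critical events below a threshold; the variable layout, units and the 1.5 s critical value are assumptions, not figures from the paper.

```python
# Illustrative sketch: headway and TTC from synchronised leader/follower samples.
import numpy as np

def headway_and_ttc(x_lead, x_follow, v_lead, v_follow, lead_length):
    """Positions x measured along the lane (m); speeds v in m/s; arrays per time step."""
    x_lead, x_follow, v_lead, v_follow = (np.asarray(a, dtype=float)
                                          for a in (x_lead, x_follow, v_lead, v_follow))
    gap = x_lead - x_follow - lead_length            # bumper-to-bumper spacing (m)
    with np.errstate(divide="ignore", invalid="ignore"):
        headway = np.where(v_follow > 0, gap / v_follow, np.inf)   # time headway (s)
        closing = v_follow - v_lead                  # closing speed (m/s)
        ttc = np.where(closing > 0, gap / closing, np.inf)         # TTC defined only when closing
    return headway, ttc

def critical_ttc_count(ttc, threshold=1.5):
    """Number of time steps with TTC below the assumed critical threshold (s)."""
    return int(np.sum(np.asarray(ttc) < threshold))
```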

Relevance:

10.00%

Publisher:

Abstract:

Purpose: Exercise for Health was a randomized, controlled trial designed to evaluate two modes of delivering (face-to-face [FtF] and over-the-telephone [Tel]) an 8-month translational exercise intervention commencing 6 weeks post-surgery (PS) for breast cancer. Methods: Outcomes included quality of life (QoL), function (fitness and upper-body function) and treatment-related side effects (fatigue, lymphoedema, body mass index, menopausal symptoms, anxiety, depression and pain). Generalised estimating equation modelling determined time (baseline [5 weeks PS], mid-intervention [6 months PS], post-intervention [12 months PS]), group (FtF, Tel, usual care [UC]) and time-by-group effects. 194 women representative of the breast cancer population were randomised to the FtF (n=67), Tel (n=67) and UC (n=60) groups. Results: There were significant (p<0.05) interaction effects for QoL, fitness and fatigue, with differences observed between the treatment groups and the UC group. Trends observed for the two treatment groups were similar: both reported improved QoL, fitness and fatigue over time, and the changes observed between baseline and post-intervention were clinically relevant. In contrast, the UC group experienced no change in, or worsening of, QoL, fitness and fatigue at mid-intervention. Although improvements in the UC group occurred by 12 months post-surgery, the change did not meet the clinically relevant threshold. There were no differences in other treatment-related side effects between groups. Conclusion: This translational intervention trial, delivered either face-to-face or over the telephone, supports exercise as a form of adjuvant breast cancer therapy that can prevent declines in fitness and function during treatment and optimise recovery post-treatment.
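
The analysis described relies on generalised estimating equations with time, group and time-by-group terms. The sketch below shows how such a model might be specified with statsmodels, assuming hypothetical variable names (qol, timepoint, group, id) and an exchangeable working correlation; it is not the trial's actual analysis script.

```python
# Hedged sketch of a GEE analysis with repeated measures clustered by participant.
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

def fit_gee(df: pd.DataFrame):
    """df: long-format data, one row per participant per timepoint (assumed columns)."""
    model = smf.gee(
        "qol ~ C(timepoint) * C(group)",        # time, group and time-by-group terms
        groups="id",                            # repeated measures clustered by participant
        data=df,
        cov_struct=sm.cov_struct.Exchangeable(),
        family=sm.families.Gaussian(),
    )
    return model.fit()

# Usage: print(fit_gee(df).summary()) — the interaction terms test time-by-group effects.
```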

Relevance:

10.00%

Publisher:

Abstract:

Cognitive radio is an emerging technology proposing the concept of dynamic spectrum access as a solution to the looming problem of spectrum scarcity caused by the growth in wireless communication systems. Under the proposed concept, non-licensed, secondary users (SU) can access spectrum owned by licensed, primary users (PU) as long as interference to the PU is kept minimal. Spectrum sensing is a crucial task in cognitive radio whereby the SU senses the spectrum to detect the presence or absence of any PU signal. Conventional spectrum sensing assumes the PU signal is 'stationary' and remains in the same activity state during the sensing cycle, while an emerging trend models the PU as 'non-stationary', undergoing state changes. Existing studies have focused on the non-stationary PU during the transmission period; however, very little research has considered the impact on spectrum sensing when the PU is non-stationary during the sensing period. The concept of the PU duty cycle is developed as a tool to analyse the performance of spectrum sensing detectors when detecting non-stationary PU signals, and new detectors are proposed to optimise detection with respect to the duty cycle exhibited by the PU. This research consists of two major investigations. The first investigates the impact of the duty cycle on the performance of existing detectors and the extent of the problem in existing studies. The second develops new detection models and frameworks to ensure the integrity of spectrum sensing when detecting non-stationary PU signals. The first investigation demonstrates that the conventional signal model formulated for a stationary PU does not accurately reflect the behaviour of a non-stationary PU; therefore the performance calculated and assumed to be achievable by the conventional detector does not reflect the performance actually achieved. By analysing the statistical properties of the duty cycle, performance degradation is shown to be a problem that cannot be easily neglected in existing sensing studies when the PU is modelled as non-stationary. The second investigation presents detectors that are aware of the duty cycle exhibited by a non-stationary PU. A two-stage detection model is proposed to improve detection performance and robustness to changes in duty cycle; this detector is most suitable for applications that require long sensing periods. A second detector, the duty-cycle-based energy detector, is formulated by integrating the distribution of the duty cycle into the test statistic of the energy detector and is suitable for short sensing periods. Its decision threshold is optimised with respect to the traffic model of the PU, so the proposed detector can calculate an average detection performance that reflects realistic results. A detection framework for spectrum sensing optimisation is proposed to provide clear guidance on the constraints on the sensing and detection models; following this framework ensures that the signal model accurately reflects practical behaviour and that the implemented detection model suits the desired detection assumption. Based on this framework, a spectrum sensing optimisation algorithm is further developed to maximise sensing efficiency for a non-stationary PU, with new optimisation constraints derived to account for any PU state changes within the sensing cycle while implementing the proposed duty-cycle-based detector.
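
To make the baseline concrete, the following is a minimal sketch of a conventional energy detector whose threshold is set from the noise-only chi-square distribution for a target false-alarm probability; the thesis's duty-cycle-based detector further integrates the duty-cycle distribution into this statistic, which the sketch does not attempt to reproduce.

```python
# Minimal sketch of a conventional energy detector for spectrum sensing.
import numpy as np
from scipy.stats import chi2

def energy_detector_threshold(n_samples, noise_var, p_fa):
    """Threshold on sum(x^2) such that P(statistic > threshold | noise only) = p_fa."""
    # Under H0 with real-valued Gaussian noise, sum(x^2)/noise_var ~ chi-square(n_samples).
    return noise_var * chi2.ppf(1.0 - p_fa, df=n_samples)

def energy_detector_decide(samples, noise_var, p_fa=0.1):
    """Return True to declare the PU present for this sensing window."""
    statistic = np.sum(np.abs(np.asarray(samples)) ** 2)
    threshold = energy_detector_threshold(len(samples), noise_var, p_fa)
    return statistic > threshold
```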

Relevance:

10.00%

Publisher:

Abstract:

Most current computer systems authorise the user at the start of a session and do not detect whether the current user is still the initial authorised user, a substitute user, or an intruder pretending to be a valid user. A system that continuously checks the identity of the user throughout the session, without being intrusive to the end-user, is therefore needed. Such a system is called a continuous authentication system (CAS). Researchers have applied several approaches to CAS, and most of these techniques are based on biometrics. These continuous biometric authentication systems (CBAS) are driven by user traits and characteristics. One of the main biometric modalities is keystroke dynamics, which has been widely trialled and accepted for providing continuous user authentication. Keystroke dynamics is appealing for many reasons. First, it is less obtrusive, since users will be typing on the computer keyboard anyway. Second, it does not require extra hardware. Finally, keystroke dynamics remain available after the authentication step at the start of the computer session. Currently, there is insufficient research in the field of CBAS with keystroke dynamics. To date, most existing schemes ignore the continuous authentication scenarios, which might affect their practicality in different real-world applications. Also, contemporary CBAS with keystroke dynamics use character sequences as features representative of user typing behaviour, but their feature selection criteria do not guarantee features with strong statistical significance, which may lead to a less accurate statistical user representation; furthermore, the selected features do not inherently incorporate user typing behaviour. Finally, existing CBAS based on keystroke dynamics typically depend on pre-defined user typing models for continuous authentication. This dependency restricts the systems to authenticating only known users whose typing samples have been modelled. This research addresses these limitations by developing a generic model to better identify and understand the characteristics and requirements of each type of CBAS and continuous authentication scenario. The research also proposes four statistics-based feature selection techniques that select features with the highest statistical significance and encompass different user typing behaviours, representing user typing patterns effectively. Finally, the research proposes a user-independent threshold approach that can authenticate a user accurately without needing any predefined user typing model a priori, and enhances the technique to detect an impostor or intruder who may take over at any point during the computer session.
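
As a rough illustration of the kind of decision such a system makes, the sketch below extracts digraph (key-pair) latencies from successive typing blocks and compares a test block to the session's reference block against a single, user-independent threshold; the feature choice, distance measure and threshold value are illustrative assumptions, not the techniques proposed in this research.

```python
# Illustrative sketch of continuous authentication from keystroke timings.
from collections import defaultdict
from statistics import mean

def digraph_latencies(events):
    """events: list of (key, press_time_seconds) in typing order."""
    feats = defaultdict(list)
    for (k1, t1), (k2, t2) in zip(events, events[1:]):
        feats[(k1, k2)].append(t2 - t1)          # latency between successive key presses
    return {pair: mean(v) for pair, v in feats.items()}

def block_distance(ref, test):
    """Mean relative difference over the digraphs shared by both blocks."""
    shared = set(ref) & set(test)
    if not shared:
        return float("inf")
    return mean(abs(ref[p] - test[p]) / max(ref[p], 1e-6) for p in shared)

def same_user(ref_block, test_block, threshold=0.35):
    """True if the test block's typing rhythm is close enough to the reference block."""
    return block_distance(digraph_latencies(ref_block),
                          digraph_latencies(test_block)) < threshold
```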

Relevance:

10.00%

Publisher:

Abstract:

We analyzed mesopic rod and S-cone interactions in terms of their contributions to the blue-yellow opponent pathway. Stimuli were generated using a 4-primary colorimeter. Mixed rod and S-cone modulation thresholds (constant L- and M-cone excitation) were measured as a function of their phase difference, with modulation amplitude equated using threshold units and contrast ratios. This study identified three interaction types: (1) a linear and antagonistic rod:S-cone interaction, (2) probability summation, and (3) a previously unidentified mutual nonlinear reinforcement. Linear rod:S-cone interactions occur within the blue-yellow opponent pathway; probability summation involves signaling by different post-receptoral pathways; the origin of the nonlinear reinforcement is possibly at the photoreceptors.
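
A small sketch of the linear (vector-summation) interaction identified above, under the assumption that rod and S-cone signals add linearly within a single pathway with relative phase delta, so that the detection threshold scales inversely with the combined amplitude; the gains are free parameters and the model is illustrative only.

```python
# Hedged sketch: relative mixed-stimulus threshold under a linear summation model.
import numpy as np

def linear_sum_threshold(phase_deg, g_rod=1.0, g_cone=1.0):
    """Relative threshold of a mixed rod + S-cone stimulus vs. phase difference (deg)."""
    delta = np.deg2rad(phase_deg)
    combined = np.sqrt(g_rod**2 + g_cone**2 + 2 * g_rod * g_cone * np.cos(delta))
    return 1.0 / np.maximum(combined, 1e-9)   # antagonism: threshold peaks near 180 deg
```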

Relevance:

10.00%

Publisher:

Abstract:

Psittacine beak and feather disease (PBFD) has a broad host range and is widespread in wild and captive psittacine populations in Asia, Africa, the Americas, Europe and Australasia. Beak and feather disease circovirus (BFDV), the causative agent, has an ~2 kb single-stranded circular DNA genome encoding just two proteins (Rep and CP). In this study we provide support for demarcation of BFDV strains by phylogenetic analysis of 65 complete genomes from databases and 22 new BFDV sequences isolated from infected psittacines in South Africa. We propose 94% genome-wide sequence identity as a strain demarcation threshold, with isolates sharing > 94% identity belonging to the same strain and strain subtypes sharing > 98% identity. Currently, BFDV diversity falls within 14 strains, with five highly divergent isolates from budgerigars probably representing a new species of circovirus with three strains (budgerigar circovirus; BCV-A, -B and -C). The geographical distribution of BFDV and BCV strains is strongly linked to the international trade in exotic birds; strains with more than one host are generally located in the same geographical area. Lastly, we examined BFDV and BCV sequences for evidence of recombination and determined that recombination had occurred in most BFDV and BCV strains. We established that there are two globally significant recombination hotspots in the viral genome: the first spans the entire intergenic region and the second lies in the C-terminal portion of the CP ORF. The implications of our results for the taxonomy and classification of circoviruses are discussed. © 2011 SGM.
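
The demarcation rule can be illustrated with a short sketch that computes pairwise genome-wide identity for aligned sequences and groups isolates sharing at least 94% identity into the same strain via single-linkage grouping; the identity calculation and grouping here are simplified assumptions, not the study's pipeline.

```python
# Illustrative sketch: group aligned, equal-length genomes by a pairwise identity threshold.
def identity(a: str, b: str) -> float:
    """Fraction of matching positions, ignoring positions gapped in either sequence."""
    matches = sum(x == y for x, y in zip(a, b) if x != "-" and y != "-")
    compared = sum(1 for x, y in zip(a, b) if x != "-" and y != "-")
    return matches / compared if compared else 0.0

def group_by_threshold(seqs: dict, threshold=0.94):
    """seqs: {isolate_name: aligned_sequence}; returns lists of isolates per strain."""
    names = list(seqs)
    parent = {n: n for n in names}
    def find(n):
        while parent[n] != n:
            parent[n] = parent[parent[n]]      # path compression
            n = parent[n]
        return n
    for i, a in enumerate(names):
        for b in names[i + 1:]:
            if identity(seqs[a], seqs[b]) >= threshold:
                parent[find(a)] = find(b)      # same strain (single linkage)
    groups_by_root = {}
    for n in names:
        groups_by_root.setdefault(find(n), []).append(n)
    return list(groups_by_root.values())
```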

Relevance:

10.00%

Publisher:

Abstract:

This paper examines the paradoxical and ubiquitous nature of Butler's heterosexual matrix and opens it up to an alternative Deleuzian analysis. Drawing on stories and art works produced in a collective biography workshop on girls and sexuality, the paper extends previous work on the subversion of the heterosexual matrix undertaken by Renold and Ringrose (2008). The paper moves, as they do, from a molar to a molecular analysis, but extends that work by re-thinking the girl/subject in terms of Deleuze and Guattari's endlessly transforming multiplicities, where "the self is only a threshold, a door, a becoming between two multiplicities" (Deleuze and Guattari, 1987: 249).

Relevance:

10.00%

Publisher:

Abstract:

Recent attention in education within many western contexts has focused on improved outcomes for students, with a particular focus on closing the gap between those who come from disadvantaged backgrounds and the rest of the student population. Much of this attention has supported a set of simplistic solutions aimed at improving scores on high-stakes standardized tests. The collateral damage (Nichols & Berliner, 2007) of such responses includes a narrowing of the curriculum, plateaus in gain scores on the tests, and unproductive blame games aimed by the media and politicians at teachers and communities (Nichols & Berliner, 2007; Snyder, 2008). Alternative approaches to improving the quality and equity of schooling remain viable alternatives to these measures. For example, in a recent study of school literacy reform in low-SES schools, Luke, Woods and Dooley (2011) argued for increasing the substantive content and intellectual quality of the curriculum as a necessary means of re-engaging middle school students, improving the outcomes of schooling, and achieving a high-quality, high-equity system. The MediaClub is an afterschool program for students in years 4 to 7 (9-12 year olds) at a primary school in a low-SES area of a large Australian city. It is run as part of an Australian Research Council funded research project. The aim of the program has been to provide an opportunity for students to gain expertise in digital technologies and media literacies in an afterschool setting. It was hypothesized that this expertise might then be used to shift the ways of being literate that these students could call on within classroom teaching and learning events. Each term, there is a different focus on digital media and information and communication technology (ICT) activities in the MediaClub. The work detailed in this chapter relates to a robotics program presented as one of the modules within this afterschool setting. As part of the program, the participants were challenged to find creative solutions to problems in a constructivist learning environment.

Relevance:

10.00%

Publisher:

Abstract:

The selection of appropriate analogue materials is a central consideration in the design of realistic physical models. We investigate the rheology of highly filled silicone polymers in order to find materials with a power-law, strain-rate-softening rheology suitable for modelling rock deformation by dislocation creep, and we report the rheological properties of the materials as functions of filler content. The mixtures exhibit strain-rate-softening behaviour but become strain-dependent with increasing amounts of filler. For the strain-independent viscous materials, flow laws are presented, while for the strain-dependent materials the relative importance of strain and strain-rate softening/hardening is reported. If the stress or strain rate is above a threshold value, some highly filled silicone polymers may be considered linear visco-elastic (strain-independent) and power-law strain-rate softening. The power-law exponent can be raised from 1 to ~3 by using mixtures of high-viscosity silicone and plasticine; however, the need for high shear strain rates to obtain the power-law rheology imposes some restrictions on the use of such materials for geodynamic modelling. Two simple-shear experiments using Newtonian and power-law strain-rate-softening materials are presented, and the results demonstrate how materials with power-law rheology produce better strain localization in analogue experiments.
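
Where a material is power-law strain-rate softening, a flow law can be recovered from rheometer data by a log-log fit, as in the hedged sketch below; the flow-law form, synthetic data and parameter names are illustrative assumptions, not measurements from this study.

```python
# Hedged sketch: fit stress = K * (strain rate)**(1/n) by linear regression in log-log space.
import numpy as np

def fit_power_law(strain_rate, stress):
    """Return (n, K) for stress = K * strain_rate**(1/n)."""
    slope, intercept = np.polyfit(np.log(strain_rate), np.log(stress), 1)
    n = 1.0 / slope                 # stress exponent (n = 1 -> Newtonian)
    K = np.exp(intercept)           # consistency, units Pa·s^(1/n)
    return n, K

# Example with noiseless synthetic strain-rate-softening data (n = 3):
rates = np.logspace(-4, -1, 20)              # shear strain rate, 1/s
stresses = 2.0e4 * rates ** (1 / 3)          # shear stress, Pa
n, K = fit_power_law(rates, stresses)        # recovers n ≈ 3, K ≈ 2e4
```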

Relevance:

10.00%

Publisher:

Abstract:

Acknowledging the recent call to review design creativity and to consider the body's affective states in education, this paper explores how desire, conceptualized as an immanent force (Deleuze & Guattari, 1987) and an irresistible force (Burke, 1753), can be a means of deeper engagement within the design studio. Positing 'disruption or blockage' as a key agent that propels subjects from fields of normalcy to fields of otherness, and subsequently mobilises distinct modes of desire, the paper takes Edmund Burke's Romantic sublime and Patricia Yaeger's feminine sublime as critical lenses through which to review a first year interior program posited around the body. The paper highlights how the embodiment of 'desirous processes' within the design program and relational encounters within the studio represent an overarching pedagogical 'hinge' (Ellsworth, 2005). Rather than being a point of beginning, the start of first year is seen and experienced as a threshold opening onto a new rhythm in a process of becoming that is already underway.

Relevance:

10.00%

Publisher:

Abstract:

The emergence of highly chloroquine (CQ)-resistant P. vivax in Southeast Asia has created an urgent need for an improved understanding of the mechanisms of drug resistance in these parasites, the development of robust tools for defining the spread of resistance, and the discovery of new antimalarial agents. The ex vivo Schizont Maturation Test (SMT), originally developed for the study of P. falciparum, has been modified for P. vivax. We retrospectively analysed the results from 760 parasite isolates assessed by the modified SMT to investigate the relationship between parasite growth dynamics and parasite susceptibility to antimalarial drugs. Previous observations of the stage-specific activity of CQ against P. vivax were confirmed and shown to have profound consequences for interpretation of the assay. Using a nonlinear model, we show that an increased duration of the assay and a higher proportion of ring stages in the initial blood sample were associated with decreased effective concentration (EC50) values for CQ, and we identify a threshold beyond which these associations no longer hold. Thus, the starting composition of parasites in the SMT and the duration of the assay can have a profound effect on the calculated EC50 for CQ. Our findings indicate that EC50 values do not truly reflect the sensitivity of the parasite to CQ when the assay duration is less than 34 hours, or when the proportion of ring-stage parasites at the start of the assay does not exceed 66%. Application of this threshold modelling approach suggests that similar issues may occur for susceptibility testing of amodiaquine and mefloquine. The statistical methodology developed here also provides a novel means of detecting stage-specific drug activity for new antimalarials.
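
For context on the EC50 values discussed above, the following is a minimal sketch of how an EC50 is typically estimated from ex vivo dose-response data by fitting a four-parameter logistic (Hill) curve; the study's nonlinear model linking EC50 to assay duration and ring-stage proportion is not reproduced here, and all values below are synthetic.

```python
# Minimal sketch: EC50 from a four-parameter logistic fit of growth vs. drug concentration.
import numpy as np
from scipy.optimize import curve_fit

def hill(conc, ec50, slope, top, bottom):
    """Four-parameter logistic dose-response curve."""
    return bottom + (top - bottom) / (1.0 + (conc / ec50) ** slope)

def estimate_ec50(conc, response):
    p0 = [np.median(conc), 1.0, np.max(response), np.min(response)]
    params, _ = curve_fit(hill, conc, response, p0=p0, maxfev=10000)
    return params[0]   # EC50 in the same units as conc

# Example with synthetic data:
conc = np.logspace(-1, 3, 12)                             # nM
resp = hill(conc, ec50=50.0, slope=1.2, top=100.0, bottom=5.0)
resp_noisy = resp + np.random.default_rng(0).normal(0, 2, conc.size)
print(round(estimate_ec50(conc, resp_noisy), 1))          # ≈ 50 nM
```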

Relevance:

10.00%

Publisher:

Abstract:

This paper presents a novel technique for segmenting an audio stream into homogeneous regions according to speaker identity, background noise, music, environmental and channel conditions. Audio segmentation is useful in audio diarization systems, which aim to annotate an input audio stream with information that attributes temporal regions of the audio to their specific sources. The segmentation method introduced in this paper uses the Generalized Likelihood Ratio (GLR), computed between two adjacent sliding windows over preprocessed speech. This approach is inspired by the popular segmentation method proposed in the pioneering work of Chen and Gopalakrishnan, which uses the Bayesian Information Criterion (BIC) with an expanding search window; this paper aims to identify and address the shortcomings associated with that approach. The proposed segmentation strategy is evaluated on the 2002 Rich Transcription (RT-02) Evaluation dataset, where a miss rate of 19.47% and a false alarm rate of 16.94% are achieved at the optimal threshold.
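
The GLR distance at the heart of the method can be sketched as follows: each of two adjacent windows of feature vectors is modelled with a single full-covariance Gaussian, their union with another, and the log-likelihoods are compared; the feature choice and regularisation below are assumptions for illustration, not the paper's exact configuration.

```python
# Hedged sketch of the GLR distance between two adjacent sliding windows of features.
import numpy as np

def _logdet_cov(x):
    """Log-determinant of the (regularised) sample covariance of a (frames x dims) array."""
    cov = np.cov(x, rowvar=False) + 1e-6 * np.eye(x.shape[1])
    return np.linalg.slogdet(cov)[1]

def glr_distance(win_a, win_b):
    """win_a, win_b: (frames x features) arrays, e.g. MFCC frames from adjacent windows."""
    n_a, n_b = len(win_a), len(win_b)
    merged = np.vstack([win_a, win_b])
    # Log generalised likelihood ratio between "one source" and "two sources";
    # a change point is hypothesised where this value peaks above a tuned threshold.
    return 0.5 * ((n_a + n_b) * _logdet_cov(merged)
                  - n_a * _logdet_cov(win_a)
                  - n_b * _logdet_cov(win_b))
```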

Relevance:

10.00%

Publisher:

Abstract:

The Chemistry Discipline Network was funded in mid-2011, with the aim of improving communication between chemistry academics in Australia. In our first year of operation, we have grown to over 100 members, established a web presence, and produced substantial mapping reports on chemistry teaching in Australia. We are now working on the definition of standards for a chemistry degree based on the Threshold Learning Outcomes published by the Learning and Teaching Academic Standards Project.

Relevance:

10.00%

Publisher:

Abstract:

The first year of a property degree program is a time to establish threshold concept knowledge and to acculturate students into their discipline or professional group. Due to the foundational nature of first year in many property degrees, students are enrolled in large, multi-disciplinary classes. There are several challenges in delivering large first year multi-disciplinary units that engage students in a community of learning and aid student retention. Through action-based research, this study shows how social networking, particularly Facebook, can be used to create a sense of community across large, multi-disciplinary units, to elicit 'real time' feedback from students, and to encourage peer-to-peer learning. The study assesses the benefits of using social media and considers the potential limitations of this medium.