882 results for Input
Abstract:
Summary: Generalized Procrustes analysis and thin plate splines were employed to create an average 3D shape template of the proximal femur that was warped to the size and shape of a single 2D radiographic image of a subject. Mean absolute depth errors are comparable with those of previous approaches utilising multiple 2D input projections. Introduction: Several approaches have been adopted to derive volumetric density (g cm-3) from a conventional 2D representation of areal bone mineral density (BMD, g cm-2). Such approaches have generally aimed at deriving an average depth across the areal projection rather than creating a formal 3D shape of the bone. Methods: Generalized Procrustes analysis and thin plate splines were employed to create an average 3D shape template of the proximal femur that was subsequently warped to suit the size and shape of a single 2D radiographic image of a subject. CT scans of excised human femora, 18 and 24 scanned at pixel resolutions of 1.08 mm and 0.674 mm, respectively, were equally split into training (used to create the 3D shape template) and test cohorts. Results: The mean absolute depth errors of 3.4 mm and 1.73 mm, respectively, for the two CT pixel sizes are comparable with those of previous approaches based upon multiple 2D input projections. Conclusions: This technique has the potential to derive volumetric density from BMD and to facilitate 3D finite element analysis for prediction of the mechanical integrity of the proximal femur. It may further be applied to other anatomical bone sites such as the distal radius and lumbar spine.
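The Generalized Procrustes step described above can be sketched as follows; this is an illustrative implementation, not the authors' code, and the landmark arrays, iteration count and unit-norm scaling convention are all assumptions:

```python
import numpy as np

def procrustes_align(shape, ref):
    """Orthogonal Procrustes: centre and unit-scale a landmark set,
    then rotate it onto the (centred, unit-scaled) reference."""
    A = shape - shape.mean(axis=0)
    B = ref - ref.mean(axis=0)
    A = A / np.linalg.norm(A)
    B = B / np.linalg.norm(B)
    # Optimal rotation from the SVD of the cross-covariance matrix
    U, _, Vt = np.linalg.svd(A.T @ B)
    return A @ (U @ Vt)

def generalized_procrustes(shapes, iters=10):
    """Iteratively align every shape to the evolving consensus (mean) shape;
    the converged mean is the average shape template."""
    mean = shapes[0]
    aligned = shapes
    for _ in range(iters):
        aligned = np.array([procrustes_align(s, mean) for s in shapes])
        mean = aligned.mean(axis=0)
        mean = mean / np.linalg.norm(mean)
    return mean, aligned
```

In the paper's pipeline the resulting mean 3D template would then be warped by thin plate splines to match a subject's 2D radiograph; that warping step is not shown here.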
Abstract:
The field of research training (for students and supervisors) is becoming more heavily regulated by the Federal Government. At the same time, quality improvement imperatives are requiring staff across the University to have better access to information and knowledge about a wider range of activities each year. Within the Creative Industries Faculty at the Queensland University of Technology (QUT), the training provided to academic and research staff is organised differently and individually. This session will involve discussion of the dichotomies found in this differentiated approach to staff training, and begin a search for best practice through interaction and input from the audience.
Abstract:
Inverse dynamics is the most comprehensive method for obtaining the net joint forces and moments during walking. However, it is based on assumptions (i.e., rigid segments linked by ideal joints) and is known to be sensitive to the input data (e.g., kinematic derivatives, positions of joint centres and centre of pressure, inertial parameters). Alternatively, transducers can be used to measure directly the load applied on the residuum of transfemoral amputees. The purpose of this study was therefore to compare the forces and moments applied on a prosthetic knee, measured directly, with those calculated by three inverse dynamics computations - corresponding to 3-segment, 2-segment, and "ground reaction vector" techniques - during the gait of one patient. The maximum RMSEs between the estimated and directly measured forces (i.e., 56 N) and moments (i.e., 5 N·m) were relatively small. However, the dynamic outcomes of the prosthetic components (i.e., absorption of the foot, friction and limit stop of the knee) were only partially captured by the inverse dynamics methods.
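The kind of bottom-up inverse dynamics computation compared in this study can be illustrated on a single lumped distal segment; this is a simplified planar sketch, and the masses, geometry and sign conventions below are hypothetical, not the study's model:

```python
import numpy as np

def cross2(a, b):
    """z-component of the 2D cross product."""
    return a[0] * b[1] - a[1] * b[0]

def net_knee_load_2d(m, I, a_com, alpha, grf, r_grf, r_knee, g=9.81):
    """Bottom-up Newton-Euler for one distal segment (prosthetic shank+foot
    lumped): solve the planar equations of motion for the net knee force and
    moment, given the measured ground reaction force (GRF).
    a_com: COM linear acceleration; alpha: angular acceleration;
    r_grf, r_knee: GRF point and knee joint positions relative to the COM."""
    a_com, grf, r_grf, r_knee = map(np.asarray, (a_com, grf, r_grf, r_knee))
    weight = np.array([0.0, -m * g])
    # Newton: m * a = F_knee + GRF + weight
    F_knee = m * a_com - grf - weight
    # Euler about the COM: I * alpha = M_knee + r_knee x F_knee + r_grf x GRF
    M_knee = I * alpha - cross2(r_knee, F_knee) - cross2(r_grf, grf)
    return F_knee, M_knee
```

The estimates' sensitivity to a_com, alpha and the joint-centre positions is exactly the sensitivity to input data that the abstract mentions.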
Abstract:
Security-critical communications devices must be evaluated to the highest possible standards before they can be deployed. This process includes tracing potential information flow through the device's electronic circuitry, for each of the device's operating modes. Increasingly, however, security functionality is being entrusted to embedded software running on microprocessors within such devices, so new strategies are needed for integrating information flow analyses of embedded program code with hardware analyses. Here we show how standard compiler principles can augment high-integrity security evaluations to allow seamless tracing of information flow through both the hardware and software of embedded systems. This is done by unifying input/output statements in embedded program execution paths with the hardware pins they access, and by associating significant software states with corresponding operating modes of the surrounding electronic circuitry.
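The unification of program I/O with hardware pins rests on standard compiler dataflow ideas; a toy forward taint propagation over straight-line code, with made-up pin names, illustrates the principle (this is not the evaluation method itself, only a sketch of the underlying analysis):

```python
def propagate_taint(assignments, sources):
    """Forward taint propagation over straight-line code.
    Each statement is (target, [operands]); a target becomes tainted when
    any operand is tainted, and a fresh untainted write clears it.
    Pin names in `sources` stand in for the hardware pins that I/O
    statements access."""
    tainted = set(sources)
    for target, operands in assignments:
        if any(op in tainted for op in operands):
            tainted.add(target)
        else:
            tainted.discard(target)
    return tainted
```

In a real evaluation this flow analysis would be run per operating mode and joined with the circuit-level analysis at the pin boundary.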
Abstract:
We investigate the behaviour of Multiple-Input Multiple-Output Orthogonal Frequency Division Multiplexing (MIMO-OFDM) systems in indoor populated environments with line-of-sight (LoS) between transmitter and receiver arrays. An in-house-built MIMO-OFDM packet transmission demonstrator, equipped with four transmitters and four receivers, was utilized to perform channel measurements at 5.2 GHz. Measurements were performed using 0 to 3 pedestrians with different antenna arrays (2 × 2, 3 × 3 and 4 × 4). The maximum average capacity for the 2 × 2 deterministic fixed-SNR scenario is 8.5 dB, compared with a maximum average capacity of 16.2 dB for the 4 × 4 deterministic scenario; thus an increment of 8 dB in average capacity was measured when the array size increased from 2 × 2 to 4 × 4. In addition, a more regular variation was observed for random scenarios than for the deterministic scenarios. An incremental trend in average channel capacity for both deterministic and random pedestrian movements was observed with increasing numbers of pedestrians and antennas. In the deterministic scenarios, the variations in average channel capacity are more noticeable than in the random scenarios due to a more prolonged and controlled body-shadowing effect. Moreover, due to the frequent LoS blocking and fixed transmission power, a slight decrement was observed in the spread between the maximum and minimum capacity in the random fixed-Tx-power scenario.
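Average capacities of this kind are conventionally computed from the measured channel matrix H via the equal-power MIMO capacity formula; a minimal sketch, assuming a known linear SNR, equal power per transmit antenna and no water-filling:

```python
import numpy as np

def mimo_capacity(H, snr_linear):
    """Channel capacity in bits/s/Hz for equal power allocation:
    C = log2 det(I + (SNR / Nt) * H H^H),
    where H is the (Nr x Nt) complex channel matrix."""
    nr, nt = H.shape
    M = np.eye(nr) + (snr_linear / nt) * (H @ H.conj().T)
    return float(np.real(np.log2(np.linalg.det(M))))
```

With measured per-subcarrier channel matrices, the reported average capacity would be the mean of this quantity over subcarriers and snapshots.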
Abstract:
For many organizations, maintaining and upgrading enterprise resource planning (ERP) systems (large packaged application software) is often far more costly than the initial implementation. Systematic planning and knowledge of the fundamental maintenance processes and maintenance-related management data are required in order to effectively and efficiently administer maintenance activities. This paper reports a revelatory case study of Government Services Provider (GSP), a high-performing ERP service provider to government agencies in Australia. GSP ERP maintenance-process and maintenance-data standards are compared with the IEEE/EIA 12207 software engineering standard for custom software, also drawing upon published research, to identify how practices in the ERP context diverge from the IEEE standard. While the results show that many best practices reflected in the IEEE standard have broad relevance to software generally, divergent practices in the ERP context necessitate a shift in management focus, additional responsibilities, and different maintenance decision criteria. Study findings may provide useful guidance to practitioners, as well as input to the IEEE and other related standards.
Abstract:
Both creative industries and innovation are slippery fish to handle conceptually, to say nothing of their relationship. This paper faces, first, the problems of definitions and data that can bedevil clear analysis of the creative industries. It then presents a method of data generation and analysis that has been developed to address these problems while providing an evidence pathway supporting the movement in policy thinking from creative output (through industry sectors) to creative input to the broader economy (through a focus on occupations/activity). Facing the test of policy relevance, this work has assisted in moving the ongoing debates about the creative industries toward innovation thinking by developing the concept of creative occupations as input value. Creative inputs as 'enablers' arguably have parallels with the way ICTs have been shown to be broad enablers of economic growth. We conclude with two short instantiations of the policy relevance of this concept: design as a creative input; and creative human capital and education.
Abstract:
Areal bone mineral density (aBMD) is the most common surrogate measurement for assessing the bone strength of the proximal femur associated with osteoporosis. Additional factors, however, contribute to the overall strength of the proximal femur, primarily the anatomical geometry. Finite element analysis (FEA) is an effective and widely used computer-based simulation technique for modeling mechanical loading of various engineering structures, providing predictions of displacement and induced stress distribution due to the applied load. FEA is therefore inherently dependent upon both density and anatomical geometry. FEA may be performed on both three-dimensional and two-dimensional models of the proximal femur derived from radiographic images, from which the mechanical stiffness may be predicted. It is examined whether the outcome measures of two-dimensional finite element analysis of X-ray images (FEXI) and three-dimensional FEA, namely computed stiffness of the proximal femur, were more sensitive than aBMD to changes in trabecular bone density and femur geometry. It is assumed that if an outcome measure follows known trends with changes in density and geometric parameters, then an increased sensitivity will be indicative of an improved prediction of bone strength. All three outcome measures increased non-linearly with trabecular bone density, increased linearly with cortical shell thickness and neck width, decreased linearly with neck length, and were relatively insensitive to neck-shaft angle. For femoral head radius, aBMD was relatively insensitive, with two-dimensional FEXI and three-dimensional FEA demonstrating a non-linear increase and decrease in sensitivity, respectively. For neck anteversion, aBMD decreased non-linearly, whereas both two-dimensional FEXI and three-dimensional FEA demonstrated a parabolic-type relationship, with maximum stiffness achieved at an angle of approximately 15°.
Multi-parameter analysis showed that all three outcome measures demonstrated their highest sensitivity to a change in cortical thickness. When changes in all input parameters were considered simultaneously, three-dimensional and two-dimensional FEA had statistically equal sensitivities (0.41±0.20 and 0.42±0.16 respectively, p = ns) that were significantly higher than the sensitivity of aBMD (0.24±0.07; p = 0.014 and 0.002 for three-dimensional and two-dimensional FEA respectively). This simulation study suggests that, since mechanical integrity and FEA are inherently dependent upon anatomical geometry, FEXI stiffness, being derived from conventional two-dimensional radiographic images, may provide an improvement in the prediction of bone strength of the proximal femur over that currently provided by aBMD.
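Sensitivities of this kind are dimensionless (relative change in outcome per relative change in input); a generic finite-difference version of such a measure, which may differ from the study's exact definition, can be written as:

```python
def normalized_sensitivity(f, x, rel_step=1e-4):
    """Dimensionless sensitivity (dY/Y)/(dX/X) of an outcome measure f
    (e.g. stiffness or aBMD) to an input parameter x (e.g. cortical
    thickness), estimated by a forward finite difference."""
    y0 = f(x)
    y1 = f(x * (1.0 + rel_step))
    return ((y1 - y0) / y0) / rel_step
```

For a power-law response f(x) = x^k, this measure returns k, which makes the reported values easy to interpret as effective exponents.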
Abstract:
Background: The proportion of older individuals in the driving population is predicted to increase in the next 50 years. This has important implications for driving safety as abilities which are important for safe driving, such as vision (which accounts for the majority of the sensory input required for driving), processing ability and cognition have been shown to decline with age. The current methods employed for screening older drivers upon re-licensure are also vision based. This study, which investigated social, behavioural and professional aspects involved with older drivers, aimed to determine: (i) if the current visual standards in place for testing upon re-licensure are effective in reducing the older driver fatality rate in Australia; (ii) if the recommended visual standards are actually implemented as part of the testing procedures by Australian optometrists; and (iii) if there are other non-standardised tests which may be better at predicting the on-road incident risk (including near misses and minor incidents) in older drivers than those tests recommended in the standards. Methods: For the first phase of the study, state-based age- and gender-stratified numbers of older driver fatalities for 2000-2003 were obtained from the Australian Transport Safety Bureau database. Poisson regression analyses of fatality rates were considered by renewal frequency and jurisdiction (as separate models), adjusting for possible confounding variables of age, gender and year. For the second phase, all practising optometrists in Australia were surveyed on the vision tests they conduct in consultations relating to driving and their knowledge of vision requirements for older drivers. Finally, for the third phase of the study to investigate determinants of on-road incident risk, a stratified random sample of 600 Brisbane residents aged 60 years and older were selected and invited to participate using an introductory letter explaining the project requirements.
In order to capture the number and type of road incidents which occurred for each participant over 12 months (including near misses and minor incidents), an important component of the prospective research study was the development and validation of a driving diary. The diary was a tool in which incidents could be logged at the time they occurred (or very close to it) and thus, in comparison with relying on participant memory over time, recall bias of incident occurrence was minimised. The association between all visual tests, cognition and scores obtained for non-standard functional tests and retrospective and prospective incident occurrence was investigated. Results: In the first phase, drivers aged 60-69 years had a 25% lower fatality risk (Rate Ratio [RR] = 0.75, 95% CI 0.32-1.77) in states with vision testing upon re-licensure compared with states with no vision testing upon re-licensure; however, because the CIs are wide, crossing 1.00, this result should be regarded with caution. Overall fatality rates, and fatality rates for those aged 70 years and older (RR = 1.17, CI 0.64-2.13), did not differ between states with and without licence renewal procedures, indicating no apparent benefit of vision testing legislation. For the second phase of the study, nearly all optometrists measured visual acuity (VA) as part of a vision assessment for re-licensing; however, 20% of optometrists did not perform any visual field (VF) testing and only 20% routinely performed automated VF on older drivers, despite the standards for licensing advocating automated VF as part of the vision standard. This demonstrates the need for more effective communication between the policy makers and those responsible for carrying out the standards. It may also indicate that the overall higher driver fatality rate in jurisdictions with vision testing requirements arises because the tests recommended by the standards are only partially being conducted by optometrists.
Hence a standardised protocol for the screening of older drivers for re-licensure across the nation must be established. The opinions of Australian optometrists with regard to the responsibility of reporting older drivers who fail to meet the licensing standards highlighted the conflict between maintaining patient confidentiality and upholding public safety. Mandatory reporting requirements for those drivers who fail to reach the standards necessary for driving would minimise potential conflict between the patient and their practitioner, and help maintain patient trust and goodwill. The final phase of the PhD program investigated the efficacy of vision, functional and cognitive tests to discriminate between at-risk and safe older drivers. Nearly 80% of the participants experienced an incident of some form over the prospective 12 months, with the total incident rate being 4.65/10,000 km. Sixty-three percent reported having a near miss and 28% had a minor incident. The results from the prospective diary study indicate that the current vision screening tests (VA and VF) used for re-licensure do not accurately predict which older drivers are at increased odds of having an on-road incident. However, the variation in visual measurements of the cohort was narrow, which also affected the results seen with the visual function questionnaires. Hence a larger cohort with greater variability should be considered for a future study. A slightly lower cognitive level (as measured with the Mini-Mental State Examination [MMSE]) did show an association with incident involvement, as did slower reaction time (RT); however, the Useful Field of View (UFOV) provided the most compelling results of the study. Cut-off values of UFOV processing (>23.3 ms), divided attention (>113 ms), selective attention (>258 ms) and overall score (moderate/high/very high risk) were effective in determining older drivers at increased odds of having any on-road incident and the occurrence of minor incidents.
Discussion: The results have shown that, for the 60-69 year age group, there is a potential benefit in testing vision upon licence renewal. However, overall fatality rates and fatality rates for those aged 70 years and older indicated no benefit of vision testing legislation, suggesting a need for the inclusion of screening tests which better predict on-road incidents. Although VA is routinely performed by Australian optometrists on older drivers renewing their licence, VF is not. Therefore there is a need for a protocol to be developed and administered which would result in standardised methods being conducted throughout the nation for the screening of older drivers upon re-licensure. Communication between the community, policy makers and those conducting the protocol should be maximised. By implementing a standardised screening protocol which incorporates a level of mandatory reporting by the practitioner, the ethical dilemma of breaching patient confidentiality would also be resolved. The tests included in this screening protocol, however, cannot solely be ones which have been implemented in the past. In this investigation, RT, MMSE and UFOV were shown to be better determinants of on-road incidents in older drivers than VA and VF; however, as previously mentioned, there was a lack of variability in visual status within the cohort. Nevertheless, the recommendation from this investigation is that, subject to appropriate sensitivity and specificity being demonstrated in the future using a cohort with wider variation in vision, functional performance and cognition, these tests of cognition and information processing should be added to the current protocol for the screening of older drivers, which may be conducted at licensing centres across the nation.
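The rate ratios and confidence intervals quoted for the first phase follow the usual Poisson/log-scale construction; a minimal sketch with illustrative event counts and person-time denominators (the study's actual counts are not reproduced here):

```python
import math

def rate_ratio(events_a, pt_a, events_b, pt_b, z=1.96):
    """Rate ratio of group A vs group B with a Wald 95% CI computed on the
    log scale, as in a simple Poisson comparison of two fatality rates.
    events_*: event counts; pt_*: person-time (or driver-years) at risk."""
    rr = (events_a / pt_a) / (events_b / pt_b)
    se_log = math.sqrt(1.0 / events_a + 1.0 / events_b)
    lo = math.exp(math.log(rr) - z * se_log)
    hi = math.exp(math.log(rr) + z * se_log)
    return rr, lo, hi
```

As in the abstract, a CI that crosses 1.00 (such as 0.32-1.77 around RR = 0.75) means the apparent reduction is not statistically distinguishable from no effect.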
Abstract:
Australia’s National Review of Visual Education (DEEWR, 2009) asserts the primacy of visual language ability, or ‘visuacy’, in problem-solving. This paper reports on a recent university/schools research project with ‘at risk’ middle school students in which visuacy was promoted as a primary medium for obtaining data relating to issues of immediate concern to the students. Using a students-as-researchers approach, the project investigated middle school students’ perspectives on school engagement and disengagement. In this project, novice researchers used a variety of data gathering methods including photography, video interviews and drawn images as well as more traditional verbal methods, such as interviews, and quantitative methods, such as questionnaires. Engaging student imagination was a key focus of the approach taken by the project, acknowledging that student participants may be reluctant to enter dialogue with teachers and researchers on matters to which they have previously had little input. Students who have previously been marginalised and prevented from contributing their voices to educational forums often have difficulty in adjusting to the novelty of collaborative research with adults (Rudduck, 2003) and may be uncertain of their own place in the relationship that defines teacher/student interactions. It is argued that the project’s promotion of visuacy, alongside more traditional literacies and numeracy in education research, helped to overcome these concerns, engaged the imaginations of the student researchers, and provided a medium for the expression of the voices of marginalised young people.
Abstract:
One of the ways in which university departments and faculties can enhance the quality of learning and assessment is to develop a ‘well thought out criterion‐referenced assessment system’ (Biggs, 2003, p. 271). In designing undergraduate degrees (courses) this entails making decisions about the levelling of expectations across different years through devising objectives and their corresponding criteria and standards: a process of alignment analogous to what happens in unit (subject) design. These decisions about levelling have important repercussions in terms of supporting students’ work‐related learning, especially in relation to their ability to cope with the increasing cognitive and skill demands made on them as they progress through their studies. They also affect the accountability of teacher judgments of students’ responses to assessment tasks, achievement of unit objectives and, ultimately, whether students are awarded their degrees and are sufficiently prepared for the world of work. Research reveals that this decision‐making process is rarely underpinned by an explicit educational rationale (Morgan et al, 2002). The decision to implement criterion‐referenced assessment in an undergraduate microbiology degree was the impetus for developing such a rationale because of the implications for alignment, and therefore ‘levelling’ of expectations across different years of the degree. This paper provides supporting evidence for a multi‐pronged approach to levelling, through backward mapping of two revised units (foundation and exit year). This approach adheres to the principles of alignment while combining a work‐related approach (via industry input) with the blended disciplinary and learner‐centred approaches proposed by Morgan et al. (2002). It is suggested that this multi‐pronged approach has the potential for making expectations, especially work‐related ones across different year levels of degrees, more explicit to students and future employers.
Abstract:
The accuracy of data derived from linked-segment models depends on how well the system has been represented. Previous investigations describing the gait of persons with partial foot amputation did not account for the unique anthropometry of the residuum or the inclusion of a prosthesis and footwear in the model and, as such, are likely to have underestimated the magnitude of the peak joint moments and powers. This investigation determined the effect of inaccuracies in the anthropometric input data on the kinetics of gait. Toward this end, a geometric model was developed and validated to estimate body segment parameters of various intact and partial feet. These data were then incorporated into customized linked-segment models, and the kinetic data were compared with that obtained from conventional models. Results indicate that accurate modeling increased the magnitude of the peak hip and knee joint moments and powers during terminal swing. Conventional inverse dynamic models are sufficiently accurate for research questions relating to stance phase. More accurate models that account for the anthropometry of the residuum, prosthesis, and footwear better reflect the work of the hip extensors and knee flexors to decelerate the limb during terminal swing phase.
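Geometric models of this kind typically approximate a residuum or foot segment as a simple solid of revolution; the sketch below uses a conical frustum with an assumed uniform tissue density to show how segment mass and centre of mass might be estimated (the paper's actual geometric model and density values may differ):

```python
import math

def frustum_segment_params(r1, r2, length, density=1100.0):
    """Mass (kg) and axial COM position (m, measured from the r1 end) of a
    solid conical frustum used as a body segment approximation.
    r1, r2: end radii (m); length: segment length (m);
    density: assumed uniform tissue density (kg/m^3) - an assumption here."""
    vol = math.pi * length * (r1 * r1 + r1 * r2 + r2 * r2) / 3.0
    mass = density * vol
    # Standard centroid of a frustum along its axis, from the r1 end
    com = length * (r1 * r1 + 2.0 * r1 * r2 + 3.0 * r2 * r2) / (
        4.0 * (r1 * r1 + r1 * r2 + r2 * r2))
    return mass, com
```

Feeding such segment parameters (rather than intact-foot defaults) into the linked-segment model is what shifts the estimated swing-phase moments and powers.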
Abstract:
Access to paid maternity leave is a major issue on our current social, political and policy agenda. Better paid maternity leave provisions are seen as one of the ways to address the inequality experienced by women as a result of the gendered nature of family responsibilities. One Brisbane kindergarten is leading the way in providing maternity leave to its employees. The Director of Campus Kindergarten, Megan Gibson, explains how, through staff input, the centre has developed a comprehensive parental leave policy that sits hand in hand with their professional development policy.
Abstract:
This paper addresses the following problem: given two or more business process models, create a process model that is the union of the process models given as input. In other words, the behavior of the produced process model should encompass that of the input models. The paper describes an algorithm that produces a single configurable process model from an arbitrary collection of process models. The algorithm works by extracting the common parts of the input process models, creating a single copy of them, and appending the differences as branches of configurable connectors. This way, the merged process model is kept as small as possible, while still capturing all the behavior of the input models. Moreover, analysts are able to trace a given element in the merged model back to the original model(s) it comes from. The algorithm has been prototyped and tested against process models taken from several application domains.
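The core idea, a single copy of common elements plus provenance annotations that let analysts trace elements back, can be sketched on plain edge lists; the model names and edges below are invented, and true configurable connectors are not modelled in this simplification:

```python
def merge_models(models):
    """Union of several process graphs (model name -> list of edges).
    Each edge in the merged graph is annotated with the set of source
    models it came from, so shared behaviour is kept as a single copy
    and every element is traceable to its origin(s)."""
    merged = {}
    for name, edges in models.items():
        for edge in edges:
            merged.setdefault(edge, set()).add(name)
    return merged

def common_part(merged, model_names):
    """Edges present in every input model - the shared single copy;
    the remaining edges are the model-specific (configurable) branches."""
    return {e for e, sources in merged.items() if sources == set(model_names)}
```

In the actual algorithm the model-specific branches would hang off configurable connectors, so each original model can be recovered by configuration.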
Abstract:
Purpose – The purpose of this paper is to examine the use of bid information, including both price and non-price factors, in predicting the bidder’s performance. Design/methodology/approach – The practice of the industry was first reviewed. Data on bid evaluation and performance records of the successful bids were then obtained from the Hong Kong Housing Department, the largest housing provider in Hong Kong. This was followed by the development of a radial basis function (RBF) neural network based performance prediction model. Findings – It is found that public clients are more conscientious and include non-price factors in their bid evaluation equations. The input variables used are available at the time of the bid, and the output variable is the project performance score recorded during work in progress achieved by the successful bidder. Past project performance score was found to be the most sensitive input variable in predicting future performance. Research limitations/implications – The paper shows the inadequacy of using price alone as the bid award criterion. The need for systematic performance evaluation is also highlighted, as this information is highly instrumental for subsequent bid evaluations. The caveat for this study is that the prediction model was developed based on data obtained from a single source. Originality/value – The value of the paper is in the use of an RBF neural network as the prediction tool, because it can model non-linear functions. This capability avoids tedious "trial and error" in deciding the number of hidden layers to be used in the network model. Keywords: Hong Kong, Construction industry, Neural nets, Modelling, Bid offer spreads. Paper type: Research paper
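An RBF network of the kind described avoids layer-count "trial and error" because, once the basis centres and width are fixed, the output weights follow from a single linear least-squares solve; a minimal sketch with Gaussian basis functions (the paper's actual input variables, centres and training data are not reproduced):

```python
import numpy as np

def rbf_design(X, centers, gamma):
    """Gaussian RBF design matrix: phi[i, j] = exp(-gamma * ||x_i - c_j||^2)."""
    d2 = ((X[:, None, :] - centers[None, :, :]) ** 2).sum(axis=2)
    return np.exp(-gamma * d2)

def rbf_fit(X, y, centers, gamma):
    """Output weights by linear least squares for fixed centres and width -
    the only 'hidden layer' is the set of basis centres."""
    Phi = rbf_design(X, centers, gamma)
    w, *_ = np.linalg.lstsq(Phi, y, rcond=None)
    return w

def rbf_predict(X, centers, gamma, w):
    """Predicted performance score as a weighted sum of basis responses."""
    return rbf_design(X, centers, gamma) @ w
```

In the paper's setting, X would hold price and non-price bid attributes (including past performance score) and y the in-progress performance score of the successful bidder.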