334 results for Impure sets
Abstract:
Multiple marker sets and models are currently available for assessing foot and ankle kinematics in gait. Despite such a wide variety of models, the reporting of methodological designs remains inconsistent and lacks clearly defined standards. This review highlights the variability found when reporting biomechanical model parameters, methodological design, and model reliability. Further, the review clearly demonstrates the need for a consensus on which methodological considerations to report in manuscripts focusing on foot and ankle biomechanics. We propose five minimum reporting standards that we believe will ensure the transparency of methods and begin to allow the community to move towards standard modelling practice. Strict adherence to these standards should ultimately improve the interpretation and clinical usability of foot and ankle marker sets and their corresponding models.
Abstract:
The complex interaction of the bones of the foot has been explored in detail in recent years, leading the biomechanics community to acknowledge that the foot can no longer be considered a single rigid segment. With advances in motion analysis technology it has become possible to quantify the biomechanics of the simplified units or segments that make up the foot. Advances in technology, coupled with falling hardware prices, have resulted in the uptake of more advanced tools for clinical gait analysis. The increased use of these techniques in clinical practice requires defined standards for modelling and reporting of foot and ankle kinematics. This systematic review aims to provide a critical appraisal of commonly used foot and ankle marker sets designed to assess kinematics and thus provide a theoretical background for the development of modelling standards.
Abstract:
Over the past decade, our understanding of foot function has increased significantly [1,2]. Our understanding of foot and ankle biomechanics appears to be directly correlated with advances in the models used to assess and quantify kinematic parameters in gait. These advances in modelling in turn yield greater detail in the data. However, we must consider that the required level of complexity is determined by the question or task being analysed. This systematic review aims to provide a critical appraisal of commonly used marker sets and foot models for assessing foot and ankle kinematics across a wide variety of clinical and research purposes.
Abstract:
This paper presents an innovative prognostics model based on health state probability estimation embedded in a closed-loop diagnostic and prognostic system. To select an appropriate classifier for health state probability estimation in the proposed prognostic model, comparative intelligent diagnostic tests were conducted using five different classifiers applied to progressive fault levels of three faults in an HP-LNG pump. Two sets of impeller-rubbing data were then employed for the prediction of pump remnant life, based on the estimation of discrete health state probabilities using the strong classification capability of the support vector machine (SVM) and a feature selection technique. The results obtained were very encouraging and showed that the proposed prognosis system has the potential to be used as an estimation tool for machine remnant life prediction in real-life industrial applications.
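As a concrete illustration of the kind of health-state probability estimation described above, the sketch below trains an SVM classifier with probability outputs on hypothetical fault-level features. The data, feature dimensions and class labels are placeholders, not the study's HP-LNG pump measurements.

```python
# Minimal sketch of discrete health-state probability estimation with an SVM,
# in the spirit of the prognostic model described above. All data below are
# hypothetical placeholders, not the paper's HP-LNG pump measurements.
import numpy as np
from sklearn.svm import SVC
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)

# Hypothetical vibration features for four progressive fault levels (0 = healthy).
X_train = rng.normal(size=(200, 6)) + np.repeat(np.arange(4), 50)[:, None] * 0.8
y_train = np.repeat(np.arange(4), 50)  # discrete health states

# SVM with probability calibration enabled (Platt scaling inside scikit-learn).
model = make_pipeline(StandardScaler(), SVC(kernel="rbf", probability=True))
model.fit(X_train, y_train)

# For a new measurement, the classifier returns a probability for each health
# state; tracking how this distribution shifts over time is the basis for
# estimating remaining (remnant) life.
x_new = rng.normal(size=(1, 6)) + 1.6
state_probs = model.predict_proba(x_new)[0]
print(dict(enumerate(np.round(state_probs, 3))))
```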
Abstract:
Cities accumulate and distribute vast sets of digital information. Many decision-making and planning processes in councils, local governments and organisations are based on both real-time and historical data. Until recently, only a small, carefully selected subset of this information had been released to the public, usually for specific purposes (e.g. train timetables or the release of planning applications through websites, to name just a few). This situation is, however, changing rapidly. Regulatory frameworks, such as the Freedom of Information legislation in the US, the UK, the European Union and many other countries, guarantee public access to data held by the state. One result of this legislation and of changing attitudes towards open data has been the widespread release of public information as part of recent Government 2.0 initiatives. This includes the creation of public data catalogues such as data.gov (U.S.), data.gov.uk (U.K.) and data.gov.au (Australia) at federal government levels, and datasf.org (San Francisco) and data.london.gov.uk (London) at municipal levels. The release of this data has opened up the possibility of a wide range of future applications and services which are now the subject of intensified research efforts. Previous research endeavours have explored the creation of specialised tools to aid decision-making by urban citizens, councils and other stakeholders (Calabrese, Kloeckl & Ratti, 2008; Paulos, Honicky & Hooker, 2009). While these initiatives represent an important step towards open data, they too often result in mere collections of data repositories. Proprietary database formats and the lack of an open application programming interface (API) limit the full potential achievable by allowing these data sets to be cross-queried. Our research, presented in this paper, looks beyond the mere release of data. It is concerned with three essential questions: First, how can data from different sources be integrated into a consistent framework and made accessible? Second, how can ordinary citizens be supported in easily composing data from different sources in order to address their specific problems? Third, what interfaces make it easy for citizens to interact with data in an urban environment, and how can that data be accessed and collected?
Abstract:
This chapter proposes a conceptual model for optimal development of needed capabilities for the contemporary knowledge economy. We commence by outlining key capability requirements of the 21st century knowledge economy, distinguishing these from those suited to the earlier stages of the knowledge economy. We then discuss the extent to which higher education currently caters to these requirements and then put forward a new model for effective knowledge economy capability learning. The core of this model is the development of an adaptive and adaptable career identity, which is created through a reflective process of career self-management, drawing upon data from the self and the world of work. In turn, career identity drives the individual’s process of skill and knowledge acquisition, including deep disciplinary knowledge. The professional capability learning thus acquired includes disciplinary skill and knowledge sets, generic skills, and also skills for the knowledge economy, including disciplinary agility, social network capability, and enterprise skills. In the final part of this chapter, we envision higher education systems that embrace the model, and suggest steps that could be taken toward making the development of knowledge economy capabilities an integral part of the university experience.
Abstract:
Making the University Matter investigates how academics situate themselves simultaneously in the university and the world and how doing so affects the viability of the university setting. The university stands at the intersection of two sets of interests, needing to be at one with the world while aspiring to stand apart from it. In an era that promises intensified political instability, growing administrative pressures, dwindling economic returns and questions about economic viability, lower enrolments and shrinking programs, can the university continue to matter into the future? And if so, in which way? What will help it survive as an honest broker? What are the mechanisms for ensuring its independent voice? Barbie Zelizer brings together some of the leading names in the field of media and communication studies from around the globe to consider a multiplicity of answers from across the curriculum on making the university matter, including critical scholarship, interdisciplinarity, curricular blends of the humanities and social sciences, practical training and policy work. The collection is introduced with an essay by the editor and each section has a brief introduction to contextualise the essays and highlight the issues they raise.
Abstract:
Foreword: In this paper I call upon a praxiological approach. Praxeology (an early alteration of praxiology) is the study of human action and conduct. The name praxeology/praxiology takes its root in praxis, Medieval Latin, from the Greek for doing or action, from prassein, 'to do, practise' (Merriam-Webster Dictionary). Having been involved in project management education, research and practice for the last twenty years, I have constantly tried to improve and to provide a better understanding and knowledge of the field and its related practice, and as a consequence to widen and deepen the competencies of the people I was working with (and my own competencies as well!), assuming that better project management leads to more efficient and effective use of resources, the development of people and, in the end, a better world. For some time I have perceived a need to clarify the foundations of the discipline of project management, or at least to elucidate what these foundations could be. An immodest task, one might say! But not a neutral one! I am constantly surprised by the way the world (i.e., organizations, universities, students and professional bodies) sees project management: as a set of methods, techniques and tools, interacting with other fields – general management, engineering, construction, information systems, etc. – bringing some effective ways of dealing with various sets of problems, from launching a new satellite to product development through to organizational change.
Abstract:
This paper draws on the work of the ‘EU Kids Online’ network funded by the EC (DG Information Society) Safer Internet plus Programme (project code SIP-KEP-321803); see www.eukidsonline.net, and addresses Australian children’s online activities in terms of risk, harm and opportunity. In particular, it draws upon data indicating that Australian children are more likely to encounter online risks — especially around seeing sexual images, bullying, misuse of personal data and exposure to potentially harmful user-generated content — than is the case with their EU counterparts. Rather than only comparing Australian children with their European equivalents, this paper places the risks experienced by Australian children in the context of the mediation and online protection practices adopted by their parents, and asks about the possible ways in which we might understand data that seems to indicate that Australian children’s experiences of online risk and harm differ significantly from the experiences of their Europe-based peers. In particular, and as an example, this paper sets out to investigate the apparent conundrum through which Australian children appear twice as likely as most European children to have seen sexual images in the past 12 months, even though their parents are more likely to filter their access to the internet than is the case with most children in the wider EU Kids Online study. Even so, one in four Australian children (25%) believes that what their parents do helps ‘a lot’ to improve their internet experience, and Australian children and their parents are a little less likely to agree about the mediation practices taking place in the family home than is the case in the EU. The AU Kids Online study was carried out as a result of the ARC Centre of Excellence for Creative Industries and Innovation’s funding of a small-scale randomised sample (N = 400) of Australian families with at least one child, aged 9–16, who goes online. The report on Risks and safety for Australian children on the internet follows the same format, and uses much of the same contextual statement around these issues, as the ‘country level’ reports produced by the 25 EU nations involved in EU Kids Online, first drafted by Livingstone et al. (2010). The entirely new material is the data itself, along with the analysis of that data.
Abstract:
The participation of the community broadcasting sector in the development of digital radio provides a potentially valuable opportunity for non-market, end-user-driven experimentation in the development of these new services in Australia. However, this development path is constrained by various factors, some of which are specific to the community broadcasting sector and others that are generic to the broader media and communications policy, industrial and technological context. This paper filters recent developments in digital radio policy and implementation through the perspectives of community radio stakeholders, obtained through interviews, to describe and analyse these constraints. The account of the early stage of digital community radio presented here is intended as a baseline for tracking the development of the sector as digital radio broadcasting develops. We also draw upon insights from scholarly debates about citizens’ media and participatory culture to identify and discuss two sets of opportunities for social benefit that are enabled by the inclusion of community radio in digital radio service development. The first arises from community broadcasting’s involvement in the propagation of the multi-literacies that drive new digital economies, not only through formal and informal multi- and trans-media training, but also in the ‘co-creative’ forms of collaborative and participatory media production that are fostered in the sector. The second arises from the fact that community radio is uniquely placed — indeed charged with the responsibility — to facilitate social participation in the design and operation of media institutions themselves, not just their service outputs.
Abstract:
The design of pre-contoured fracture fixation implants (plates and nails) that correctly fit the anatomy of a patient relies on 3D models of long bones with accurate geometric representation. 3D data is usually available from computed tomography (CT) scans of human cadavers, which generally represent the over-60 age group. Thus, despite the fact that half of the seriously injured population comes from the 30-year age group and below, virtually no data exists from these younger age groups to inform the design of implants that optimally fit patients from these groups. Hence, relevant bone data from these age groups is required. The current gold standard for acquiring such data, CT, involves ionising radiation and cannot be used to scan healthy human volunteers. Magnetic resonance imaging (MRI) has been shown to be a potential alternative in previous studies conducted using small bones (tarsal bones) and parts of long bones. However, in order to use MRI effectively for 3D reconstruction of human long bones, further validation using long bones and appropriate reference standards is required. Accurate reconstruction of 3D models from CT or MRI data sets requires an accurate image segmentation method. Currently available sophisticated segmentation methods involve complex programming and mathematics that many researchers are not trained to perform. Therefore, an accurate but relatively simple method is required for segmentation of CT and MRI data. Furthermore, some of the limitations of 1.5T MRI, such as very long scanning times and poor contrast in articular regions, can potentially be reduced by using higher-field 3T MRI. However, a quantification of the signal-to-noise ratio (SNR) gain at the bone–soft tissue interface should be performed; this is not reported in the literature. As MRI scanning of long bones requires very long scanning times, the acquired images are prone to motion artefacts arising from random movements of the subject's limbs. One of the artefacts observed is the step artefact, believed to result from random movements of the volunteer during a scan. This needs to be corrected before the models can be used for implant design. As the first aim, this study investigated two segmentation methods, intensity thresholding and Canny edge detection, as accurate but simple methods for segmentation of MRI and CT data. The second aim was to investigate the usability of MRI as a radiation-free imaging alternative to CT for reconstruction of 3D models of long bones. The third aim was to use 3T MRI to address the poor contrast in articular regions and the long scanning times of current MRI. The fourth and final aim was to minimise the step artefact using 3D modelling techniques. The segmentation methods were investigated using CT scans of five ovine femora. Single-level thresholding was performed using a visually selected threshold level to segment the complete femur. For multilevel thresholding, multiple threshold levels calculated from the threshold selection method were used for the proximal, diaphyseal and distal regions of the femur. Canny edge detection was applied by delineating the outer and inner contours of the 2D images and then combining them to generate the 3D model. Models generated from these methods were compared to the reference standard generated from mechanical contact scans of the denuded bone. The second aim was addressed using CT and MRI scans of five ovine femora, segmented using the multilevel threshold method.
A surface geometric comparison was conducted between the CT-based, MRI-based and reference models. To quantitatively compare the 1.5T images to the 3T MRI images, the right lower limbs of five healthy volunteers were scanned using scanners from the same manufacturer. The images obtained using identical protocols were compared by means of the SNR and contrast-to-noise ratio (CNR) of muscle, bone marrow and bone. In order to correct the step artefact in the final 3D models, the step was simulated in five ovine femora scanned with a 3T MRI scanner and was corrected using an alignment method based on the iterative closest point (ICP) algorithm. The present study demonstrated that the multilevel threshold approach, in combination with the threshold selection method, can generate 3D models of long bones with an average deviation of 0.18 mm; the corresponding figure for the single-threshold method was 0.24 mm. There was a statistically significant difference between the accuracy of the models generated by the two methods. In comparison, the Canny edge detection method generated an average deviation of 0.20 mm. MRI-based models exhibited an average deviation of 0.23 mm compared with 0.18 mm for CT-based models; the difference was not statistically significant. 3T MRI improved the contrast at the bone–muscle interfaces of most anatomical regions of the femora and tibiae, potentially reducing the inaccuracies conferred by poor contrast in the articular regions. Using the robust ICP algorithm to align the 3D surfaces, the step artefact caused by the volunteer moving the leg was corrected, with errors of 0.32 ± 0.02 mm when compared with the reference standard. The study concludes that magnetic resonance imaging, together with simple multilevel thresholding segmentation, is able to produce 3D models of long bones with accurate geometric representation. The method is, therefore, a potential alternative to the current gold standard, CT imaging.
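The slice-level segmentation approaches described in this abstract (single- and multilevel intensity thresholding, and Canny edge detection) can be sketched as follows. This is a minimal illustration using scikit-image on a single 2D image; the file name and simplified threshold handling are assumptions, not the study's actual pipeline.

```python
# Minimal sketch of intensity thresholding and Canny edge detection applied to
# one CT/MRI slice. The file name and threshold choices are illustrative only.
import numpy as np
from skimage import io, filters, feature
from scipy import ndimage as ndi

slice_img = io.imread("femur_slice.png", as_gray=True).astype(float)  # hypothetical slice

# --- Intensity thresholding ---
# A single global threshold (cf. single-level thresholding in the study) ...
t_single = filters.threshold_otsu(slice_img)
mask_single = slice_img > t_single

# ... or multiple thresholds; multi-Otsu stands in here for the study's
# region-specific (proximal/diaphyseal/distal) threshold selection.
t_multi = filters.threshold_multiotsu(slice_img, classes=3)
mask_multi = slice_img > t_multi[-1]  # keep the brightest class (cortical bone on CT)

# --- Canny edge detection ---
# Edges approximate the outer/inner contours, which are then closed into a region.
edges = feature.canny(slice_img, sigma=2.0)
mask_canny = ndi.binary_fill_holes(edges)

# Either mask can then be stacked slice by slice and meshed (e.g. marching cubes)
# into a 3D model for comparison against a reference scan.
print(mask_single.sum(), mask_multi.sum(), mask_canny.sum())
```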
Abstract:
In this video, a male voice recites a script composed entirely of jokes. Words flash on screen in time with the spoken words. Sometimes the two sets of words match, and sometimes they differ. This work examines processes of signification. It emphasizes disruption and disconnection as fundamental and generative operations in making meaning. Extending post-structural and deconstructionist ideas, this work questions the relationship between written and spoken words. By deliberately confusing the signifying structures of jokes and narratives, it questions the sites and mechanisms of comprehension, humour and signification.
Abstract:
In this video, a male voice recites a teenage love poem. Words flash on screen in time with the spoken words. Sometimes the two sets of words match, and sometimes they differ. This work examines processes of signification. It emphasizes disruption and disconnection as fundamental and generative operations in making meaning. Extending post-structural and deconstructionist ideas, this work questions the relationship between written and spoken words. By actively disrupting the sincerity of a teenage love poem, it questions the sites and mechanisms of comprehension, poetry and signification.
Abstract:
The driver response (reaction) time (tr) of the second queuing vehicle is generally longer than that of other vehicles at signalized intersections. Although this phenomenon was first reported in 1972, it is still ignored in conventional departure models. This paper highlights the need for quantitative measurement and analysis of queuing vehicle performance in the spontaneous discharge pattern, because such analysis can improve microsimulation. Video recordings from major cities in Australia, plus twenty-two sets of vehicle trajectories extracted from the Next Generation Simulation (NGSIM) Peachtree Street Dataset, have been analyzed to better understand queuing vehicle performance in the discharge process. Findings from this research will improve the treatment of driver response time and can also be used for the calibration of microscopic traffic simulation models.
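As a rough illustration of how a queued driver's response time can be extracted from trajectory data of the kind analysed above, the sketch below compares start-of-motion times between consecutive vehicles in a queue. The column names, speed threshold and sample values are illustrative assumptions, not the paper's data or definitions.

```python
# Illustrative extraction of a queued driver's response time from NGSIM-style
# time/speed trajectories. Threshold and data below are hypothetical.
import pandas as pd

START_SPEED = 0.5  # m/s: speed above which a vehicle is considered to have started moving

def start_time(traj: pd.DataFrame) -> float:
    """Return the first timestamp at which the vehicle exceeds START_SPEED."""
    moving = traj[traj["speed_mps"] > START_SPEED]
    return moving["time_s"].iloc[0]

def response_time(leader: pd.DataFrame, follower: pd.DataFrame) -> float:
    """Response time of a queued vehicle: delay between the vehicle ahead
    starting to move and this vehicle starting to move."""
    return start_time(follower) - start_time(leader)

# Hypothetical two-vehicle queue discharging after a signal turns green.
t = [0.0, 0.5, 1.0, 1.5, 2.0, 2.5, 3.0]
veh1 = pd.DataFrame({"time_s": t, "speed_mps": [0, 0, 0.8, 2.0, 3.5, 5.0, 6.2]})
veh2 = pd.DataFrame({"time_s": t, "speed_mps": [0, 0, 0, 0, 0.9, 2.4, 4.0]})

print(response_time(veh1, veh2))  # 1.0 s for this illustrative data
```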