664 results for Mathematics - Graphic methods
Abstract:
The space and time fractional Bloch–Torrey equation (ST-FBTE) has been used to study anomalous diffusion in the human brain. Numerical methods for solving the ST-FBTE in three dimensions are computationally demanding. In this paper, we propose a computationally effective fractional alternating direction method (FADM) to overcome this problem. We consider the ST-FBTE on a finite domain where the time and space derivatives are replaced by the Caputo–Djrbashian and the sequential Riesz fractional derivatives, respectively. The stability and convergence properties of the FADM are discussed. Finally, some numerical results for the ST-FBTE are given to confirm our theoretical findings.
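For orientation, a commonly cited form of the ST-FBTE is sketched below; the symbols $M$, $\lambda$ and $K_\beta$ and the exact coefficient conventions are illustrative assumptions and may differ from the formulation used in the paper.

\[
{}^{C}_{0}D_{t}^{\alpha} M(\mathbf{r},t) = \lambda\,M(\mathbf{r},t) + K_{\beta}\left( \frac{\partial^{2\beta}}{\partial |x|^{2\beta}} + \frac{\partial^{2\beta}}{\partial |y|^{2\beta}} + \frac{\partial^{2\beta}}{\partial |z|^{2\beta}} \right) M(\mathbf{r},t), \qquad 0 < \alpha \le 1, \; \tfrac{1}{2} < \beta \le 1,
\]

where ${}^{C}_{0}D_{t}^{\alpha}$ denotes the Caputo–Djrbashian time derivative and $\partial^{2\beta}/\partial|x|^{2\beta}$ the Riesz fractional derivative in each spatial direction; an alternating direction method advances one time step by treating each of the three spatial directions implicitly in turn.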
Abstract:
Spreadsheet for Creative City Index 2012
Abstract:
Parametric and generative modelling methods are ways of making computer models more flexible and of formalising domain-specific knowledge. At present, no open standard exists for the interchange of parametric and generative information. The Industry Foundation Classes (IFC), an open standard for interoperability in building information models, are presented as the basis for an open standard in parametric modelling. The advantage of allowing parametric and generative representations is that the early design process can allow for more iteration and that changes can be implemented more quickly than with traditional models. This paper begins with a formal definition of what constitutes parametric and generative modelling methods and then proceeds to describe an open standard in which the interchange of components could be implemented. As an illustrative example of generative design, Frazer’s ‘Reptiles’ project from 1968 is reinterpreted.
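As a loose illustration of the parametric idea described above, the Python sketch below shows a component whose geometry is regenerated from driving parameters rather than stored explicitly; the class and field names are hypothetical and are not part of the IFC schema.

```python
# Hypothetical sketch of a parametric component: only the driving parameters
# and the regeneration rule are stored, not the resulting geometry itself.
from dataclasses import dataclass

@dataclass
class ParametricWall:
    length: float
    height: float
    thickness: float

    def generate_mesh(self) -> list[tuple[float, float, float]]:
        """Regenerate the eight corner vertices of a simple box wall."""
        l, h, t = self.length, self.height, self.thickness
        return [(x, y, z) for x in (0.0, l) for y in (0.0, t) for z in (0.0, h)]

wall = ParametricWall(length=6.0, height=2.7, thickness=0.2)
wall.length = 8.0                 # change a driving parameter ...
vertices = wall.generate_mesh()   # ... and the geometry is regenerated
print(len(vertices), vertices[-1])
```

An interchange standard would serialise the parameters and the generative rule in a tool-neutral form, so the receiving application can rebuild and continue to edit the model rather than receiving frozen geometry.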
Abstract:
Qualitative Health Psychology aims to contribute to the debate about the nature of psychology and of science through ‘an examination of the role of qualitative research within health psychology’ (p. 3). The editors, in bringing together contributors from the UK, Ireland, Canada, Brazil, New Zealand and Australia, have compiled a text that reflects different uses of qualitative health research in diverse social and cultural contexts. Structured into three parts, the book encompasses key theoretical and methodological issues in qualitative research in its attempt to encourage broad epistemological debate within health psychology.
Abstract:
The appearance of poststructuralism as a research methodology in public health literature raises questions about the history and purpose of this research. We examine (a) some aspects of the history of qualitative methods and their place within larger social and research domains, and (b) the purposes of a public health research that employs poststructuralist philosophy, delineating the methodological issues that require consideration in positing a poststructural analysis. We argue against poststructuralism becoming a research methodology deployed to seize the public health debate, rather than being employed for its own particular critical strengths.
Abstract:
This dissertation analyses how physical objects are translated into digital artworks using techniques that can lead to ‘imperfections’ in the resulting digital artwork, imperfections that are typically removed to arrive at a ‘perfect’ final representation. The dissertation discusses the adaptation of existing techniques into an artistic workflow that acknowledges and incorporates the imperfections of translation into the final pieces. It presents an exploration of the relationship between physical and digital artefacts and the processes used to move between the two. The work explores the ‘craft’ of digital sculpting and the technology used in producing what the artist terms ‘a naturally imperfect form’, incorporating knowledge of traditional sculpture, an understanding of anatomy and an interest in the study of bones (osteology). The outcomes of the research are presented as a series of digital sculptural works, exhibited as a collection of curiosities in multiple mediums, including interactive game spaces, augmented reality (AR), rapid prototype prints (RP) and video displays.
Abstract:
1. Autonomous acoustic recorders are widely available and can provide a highly efficient method of species monitoring, especially when coupled with software to automate data processing. However, the adoption of these techniques is restricted by a lack of direct comparisons with existing manual field surveys. 2. We assessed the performance of autonomous methods by comparing manual and automated examination of acoustic recordings with a field-listening survey, using commercially available autonomous recorders and custom call detection and classification software. We compared the detection capability, time requirements, areal coverage and weather condition bias of these three methods using an established call monitoring programme for a nocturnal bird, the little spotted kiwi (Apteryx owenii). 3. The autonomous recorder methods had very high precision (>98%) and required <3% of the time needed for the field survey. They were less sensitive, with visual spectrogram inspection recovering 80% of the total calls detected and automated call detection 40%, although this recall increased with signal strength. The areal coverage of the spectrogram inspection and automatic detection methods was 85% and 42% of the field survey, respectively. The methods using autonomous recorders were more adversely affected by wind and did not show the positive association between ground moisture and call rates that was apparent from the field counts. However, all methods produced the same result for the most important conservation information from the survey: the annual change in calling activity. 4. Autonomous monitoring techniques incur different biases from manual surveys and so can yield different ecological conclusions if sampling is not adjusted accordingly. Nevertheless, the sensitivity, robustness and high accuracy of automated acoustic methods demonstrate that they offer a suitable and extremely efficient alternative to field observer point counts for species monitoring.
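For context, the precision and recall figures quoted above follow from simple detection counts, as in the Python sketch below; the counts used are hypothetical and are not taken from the study.

```python
# Hypothetical worked example of the precision/recall metrics quoted above.
def precision(true_pos: int, false_pos: int) -> float:
    return true_pos / (true_pos + false_pos)

def recall(true_pos: int, false_neg: int) -> float:
    return true_pos / (true_pos + false_neg)

# e.g. 400 automated detections, 6 of them false alarms, against
# 1000 calls recorded by the field-listening survey (made-up numbers)
tp, fp, total_true = 394, 6, 1000
print(f"precision = {precision(tp, fp):.1%}")            # ~98.5%
print(f"recall    = {recall(tp, total_true - tp):.1%}")  # ~39.4%
```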
Abstract:
Technological growth in the 21st century is exponential, yet understanding of the associated risk, uncertainty and user acceptance remains scattered. This calls for appropriate study of people accepting controversial technology (PACT). The Internet and the services around it, such as the World Wide Web, e-mail, instant messaging and social networking, are increasingly important in many aspects of our lives. Sharing medical and personal health information over the Internet is controversial and demands validation, usability and acceptance. Whilst the literature suggests that the Internet enhances positive interactions between patients and physicians, some studies establish the opposite, in particular because of the associated risk. In recent years the Internet has attracted considerable attention as a means to improve health and health care delivery. However, it is not clear how widespread the use of the Internet for health care really is or what impact it has on health care utilisation. The estimated impact of Internet usage varies widely between locations, both locally and globally. As a result, an estimate (or prediction) of Internet use and its effects on Medical Informatics related decision-making is impractical. This opens up research issues around validating and accepting Internet usage when designing and developing appropriate policy and process activities for Medical Informatics, Health Informatics and/or e-Health related protocols. Obtaining data on Internet usage for Medical Informatics related activities is often unfeasible. This paper presents a trend analysis of the growth of Internet usage in medical informatics related activities. In order to perform the analysis, data were extracted from ERA (Excellence in Research for Australia) ranked “A” and “A*” journal publications and from reports in the authenticated public domain. The study is limited to analyses of Internet usage trends in the United States, Italy, France and Japan. Projected trends and their influence on the field of medical informatics are reviewed and discussed. The study clearly indicates a trend of patients becoming active consumers of health information rather than passive recipients.
Abstract:
Ambiguity resolution plays a crucial role in real-time kinematic GNSS positioning, which gives centimetre-precision positioning results if all the ambiguities in each epoch are correctly fixed to integers. However, incorrectly fixed ambiguities can result in large positioning offsets of up to several metres without notice. Hence, ambiguity validation is essential to control the quality of ambiguity resolution. Currently, the most popular ambiguity validation method is the ratio test. The criterion of the ratio test is often empirically determined. An empirically determined criterion can be dangerous, because a fixed criterion cannot fit all scenarios and does not directly control the ambiguity resolution risk. In practice, depending on the underlying model strength, the ratio test criterion can be too conservative for some models and too risky for others. A more rational approach is to determine the criterion according to the underlying model and the user requirement. Missed detections of incorrect integers lead to a hazardous result, which should be strictly controlled; in ambiguity resolution the missed-detection rate is often known as the failure rate. In this paper, a fixed failure rate ratio test method is presented and applied in the analysis of GPS and Compass positioning scenarios. The fixed failure rate approach is derived from integer aperture estimation theory, which is theoretically rigorous. In this approach, a criteria table for the ratio test is computed from extensive data simulations, and real-time users can determine the ratio test criterion by looking up the table. This method has been applied to medium-distance GPS ambiguity resolution, but multi-constellation and high-dimensional scenarios have not been discussed so far. In this paper, a general ambiguity validation model is derived based on hypothesis test theory, the fixed failure rate approach is introduced, and in particular the relationship between the ratio test threshold and the failure rate is examined. Finally, the factors that influence the fixed failure rate ratio test threshold are discussed on the basis of extensive data simulation. The results show that the fixed failure rate approach, combined with a proper stochastic model, is a more reasonable ambiguity validation method.
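A minimal Python sketch of a fixed failure rate ratio test is given below; the threshold values and table keys are illustrative assumptions and are not taken from the paper.

```python
# Sketch of a fixed failure rate ratio test. r_best and r_second are the
# squared norms of the ambiguity residuals of the best and second-best
# integer candidates; the threshold comes from a precomputed criteria table.
def ratio_test(r_best: float, r_second: float, threshold: float) -> bool:
    """Accept the fixed (integer) solution only if the ratio clears the threshold."""
    return (r_second / r_best) >= threshold

# Hypothetical criteria table keyed by (tolerated failure rate, ambiguity
# dimension); real tables are built from extensive data simulations.
CRITERIA_TABLE = {
    (0.01, 4): 2.5,
    (0.01, 8): 1.8,
    (0.001, 4): 3.0,
}

threshold = CRITERIA_TABLE[(0.01, 4)]
accept = ratio_test(r_best=0.8, r_second=2.4, threshold=threshold)
print("integer fix accepted" if accept else "fall back to the float solution")
```

The point of the fixed failure rate approach is that the threshold is chosen per model and per tolerated failure rate, rather than being a single empirical constant.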
Abstract:
Overview:
- Development of mixed methods research
- Benefits and challenges of “mixing”
- Different models
- Good design
- Two examples
- How to report?
- Have a go!
Abstract:
Over a seven-year period, Mark Radvan directed a suite of children’s theatre productions adapted from the original Tashi stories by Australian writers Anna and Barbara Fienberg. The Tashi Project’s repertoire of plays was performed to over 40,000 children aged between 3 and 10 years, and their carers, in seasons at the Out of the Box Festival, at Brisbane Powerhouse and in venues across Australia in two interstate tours in 2009 and 2010. The project investigated how best to combine an exploration of theatrical forms and conventions with a performance style evolved in a specially developed training program and a deliberate positioning of young children as audiences capable of sophisticated readings of action, symbol, theme and character. The results of this project show that, when brought into an appropriate relationship with the theatre artists, young children aged 3-5 can engage with sophisticated narrative forms, and with the right contextual framing they enjoy heightened dramatic and emotional tension, bringing to the event sustained and highly engaged concentration. Older children aged 6-10 also bring sustained and heightened engagement to the same stories, provided that other more sophisticated dramatic elements, such as character, theme and style, are woven into the construction of the performances.
Abstract:
The impact-induced deposition of Al13 clusters with icosahedral structure on a Ni(0 0 1) surface was studied by molecular dynamics (MD) simulation using Finnis–Sinclair potentials. The incident kinetic energy (Ein) ranged from 0.01 to 30 eV per atom. The structural and dynamical properties of Al clusters on Ni surfaces were found to be strongly dependent on the impact energy. At much lower energies, the Al cluster was deposited on the surface as an intact molecule; however, the original icosahedral structure was transformed into an fcc-like one owing to the interaction and the structural mismatch between the Al cluster and the Ni surface. With increasing impinging energy, the cluster was deformed severely when it contacted the substrate and then broken up by the dense collision cascade, and the cluster atoms finally spread across the surface. When the impact energy was higher than 11 eV, defects such as Al substitutions and Ni ejections were observed. The simulation indicated that there exists an optimum energy range suitable for layer-by-layer Al epitaxial growth. In addition, at higher impinging energies, atomic exchange between Al and Ni atoms favours surface alloying.
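As a small aside on the deposition parameter, the Python sketch below converts the quoted incident kinetic energy per atom into an impact speed via E = (1/2)mv^2; the constants are standard physical values, but the helper itself is illustrative and not part of the simulation described above.

```python
# Convert incident kinetic energy per atom (eV) to the impact speed of an
# Al atom. Illustrative helper; not taken from the simulation code.
import math

EV_TO_J = 1.602176634e-19      # joules per electronvolt
AMU_TO_KG = 1.66053906660e-27  # kilograms per atomic mass unit
M_AL = 26.98 * AMU_TO_KG       # mass of one Al atom (kg)

def impact_speed(energy_ev_per_atom: float) -> float:
    """Return the impact speed in m/s for a given kinetic energy per atom."""
    energy_j = energy_ev_per_atom * EV_TO_J
    return math.sqrt(2.0 * energy_j / M_AL)

for e in (0.01, 1.0, 11.0, 30.0):
    print(f"{e:5.2f} eV/atom -> {impact_speed(e):8.0f} m/s")
```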
Abstract:
Quantitative market data has traditionally been used throughout marketing and business as a tool to inform and direct design decisions. However, in our changing economic climate, businesses need to innovate and create products their customers will love. Deep customer insight methods move beyond simply questioning customers and aim to provoke true emotional responses in order to reveal new opportunities that go beyond functional product requirements. This paper explores traditional market research methods and compares them to methods used to gain deep customer insights. This study reports on a collaborative research project with seven small to medium enterprises and four multi-national organisations. Firms were introduced to a design-led innovation approach and were taught the different methods for gaining deep customer insights. Interviews were conducted to understand the experience and outcomes of pre-existing research methods and deep customer insight approaches. Findings concluded that deep customer insights were unlikely to be revealed through traditional market research techniques. The theoretical outcome of this study is a complementary methods matrix, providing guidance on appropriate research methods in accordance with a project’s timeline.
Abstract:
The cardiac catheterisation laboratory (CCL) is a specialised medical radiology facility where both chronic-stable and life-threatening cardiovascular illness is evaluated and treated. Although there are many potential sources of discomfort and distress associated with procedures performed in the CCL, a general anaesthetic is not usually required. For this reason, an anaesthetist is not routinely assigned to the CCL. Instead, to manage pain, discomfort and anxiety during the procedure, nurses administer a combination of sedative and analgesic medications according to direction from the cardiologist performing the procedure. This practice is referred to as nurse-administered procedural sedation and analgesia (PSA). While anecdotal evidence suggested that nurse-administered PSA was commonly used in the CCL, it was clear from the limited information available that current nurse-led PSA administration and monitoring practices varied and that there was contention around some aspects of practice, including the type of medications that were suitable to be used and the depth of sedation that could be safely induced without an anaesthetist present. The overall aim of the program of research presented in this thesis was to establish an evidence base for nurse-led sedation practices in the CCL context. A sequential mixed methods design was used over three phases.

The objective of the first phase was to appraise the existing evidence for nurse-administered PSA in the CCL. Two studies were conducted. The first study was an integrative review of empirical research studies and clinical practice guidelines focused on nurse-administered PSA in the CCL as well as in other similar procedural settings. This was the first review to systematically appraise the available evidence supporting the use of nurse-administered PSA in the CCL. A major finding was that, overall, nurse-administered PSA in the CCL was generally deemed to be safe. However, it was concluded from the analysis of the studies and the guidelines included in the review that the management of sedation in the CCL was impacted by a variety of contextual factors, including local hospital policy, workforce constraints and cardiologists’ preferences for the type of sedation used. The second study in the first phase was conducted to identify a sedation scale that could be used to monitor level of sedation during nurse-administered PSA in the CCL. It involved a structured literature review and psychometric analysis of scale properties. However, only one scale was found that was developed specifically for the CCL, and it had not undergone psychometric testing; several weaknesses were identified in its item structure. The other sedation scales that were identified were developed for the ICU. Although these scales have demonstrated validity and reliability in the ICU, weaknesses in their item structure precluded their use in the CCL. As the findings indicated that no existing sedation scale should be applied to practice in the CCL, recommendations for the development and psychometric testing of a new sedation scale were made.

The objective of the second phase of the program of research was to explore current practice. Three studies were conducted in this phase using both quantitative and qualitative research methods. The first was a qualitative explorative study of nurses’ perceptions of the issues and challenges associated with nurse-administered PSA in the CCL. Major themes emerged from analysis of the qualitative data regarding the lack of access to anaesthetists, the limitations of sedative medications, the barriers to effective patient monitoring and the impact that the increasing complexity of procedures has on patients' sedation requirements. The second study in Phase Two was a cross-sectional survey of nurse-administered PSA practice in Australian and New Zealand CCLs. This was the first study to quantify the frequency with which nurse-administered PSA is used in the CCL setting and to characterise associated nursing practices. It was found that nearly all CCLs utilise nurse-administered PSA (94%). Of note, by characterising nurse-administered PSA in Australian and New Zealand CCLs, several strategies to improve practice were identified, such as setting up protocols for patient monitoring and establishing comprehensive PSA education for CCL nurses. The third study in Phase Two was a matched case-control study of risk factors for impaired respiratory function during nurse-administered PSA in the CCL setting. Patients with acute illness were found to be nearly twice as likely to experience impaired respiratory function during nurse-administered PSA (OR=1.78; 95% CI=1.19-2.67; p=0.005). These significant findings can now be used to inform prospective studies investigating the effectiveness of interventions for impaired respiratory function during nurse-administered PSA in the CCL.

The objective of the third and final phase of the program of research was to develop recommendations for practice. To achieve this objective, a synthesis of findings from the previous phases informed a modified Delphi study, which was conducted to develop a set of clinical practice guidelines for nurse-administered PSA in the CCL. The clinical practice guidelines that were developed set current best-practice standards for pre-procedural patient assessment and risk screening practices, as well as the intra- and post-procedural patient monitoring practices that nurses who administer PSA in the CCL should undertake in order to deliver safe, evidence-based and consistent care to the many patients who undergo procedures in this setting.

In summary, the mixed methods approach enabled the research objectives to be comprehensively addressed in an informed, sequential manner and, as a consequence, this thesis has generated a substantial amount of new knowledge to inform and support nurse-led sedation practice in the CCL context. However, a limitation of the research to note is that the comprehensive appraisal of the evidence, combined with the guideline development process, highlighted numerous deficiencies in the evidence base. As such, rather than being based on high-level evidence, many of the recommendations for practice were produced by consensus. For this reason, further research is required in order to ascertain which specific practices result in the best patient and health service outcomes. Therefore, along with necessary guideline implementation and evaluation projects, post-doctoral research is planned to follow up on the research gaps identified, which will form part of a continuing program of research in this field.
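For readers unfamiliar with the statistic reported above, the Python sketch below shows how an odds ratio and a Woolf-type 95% confidence interval are computed from a 2x2 table; the counts are hypothetical, and the thesis used a matched case-control design, so its actual analysis would account for the matching rather than use this unmatched sketch.

```python
# Illustrative odds ratio and Woolf 95% CI from a hypothetical 2x2 table.
import math

a, b = 60, 40   # acute illness: cases, controls    (made-up counts)
c, d = 45, 55   # no acute illness: cases, controls (made-up counts)

odds_ratio = (a * d) / (b * c)
se_log_or = math.sqrt(1/a + 1/b + 1/c + 1/d)
lower = math.exp(math.log(odds_ratio) - 1.96 * se_log_or)
upper = math.exp(math.log(odds_ratio) + 1.96 * se_log_or)
print(f"OR = {odds_ratio:.2f}, 95% CI = {lower:.2f}-{upper:.2f}")
```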