149 results for Topology-based methods
Abstract:
Evidence-based Practice (EBP) has recently emerged as a topic of discussion amongst professionals within the library and information services (LIS) industry. Simply stated, EBP is the process of using formal research skills and methods to assist in decision making and establishing best practice. The emerging interest in EBP within the library context serves to remind the library profession that research skills and methods can help ensure that the library industry remains current and relevant in changing times. The LIS sector faces ongoing challenges in terms of the expectation that financial and human resources will be managed efficiently, particularly if library budgets are reduced and accountability to the principal stakeholders is increased. Library managers are charged with the responsibility to deliver relevant and cost effective services, in an environment characterised by rapidly changing models of information provision, information access and user behaviours. Consequently they are called upon not only to justify the services they provide, or plan to introduce, but also to measure the effectiveness of these services and to evaluate the impact on the communities they serve. The imperative for innovation in and enhancements to library practice is accompanied by the need for a strong understanding of the processes of review, measurement, assessment and evaluation. In 2001 the Centre for Information Research was commissioned by the Chartered Institute of Library and Information Professionals (CILIP) in the UK to conduct an examination into the research landscape for library and information science. The examination concluded that research is “important for the LIS [library and information science] domain in a number of ways” (McNicol & Nankivell, 2001, p.77). At the professional level, research can inform practice, assist in the future planning of the profession, raise the profile of the discipline, and indeed the reputation and standing of the library and information service itself. At the personal level, research can “broaden horizons and offer individuals development opportunities” (McNicol & Nankivell, 2001, p.77). The study recommended that “research should be promoted as a valuable professional activity for practitioners to engage in” (McNicol & Nankivell, 2001, p.82). This chapter will consider the role of EBP within the library profession. A brief review of key literature in the area is provided. The review considers issues of definition and terminology, highlights the importance of research in professional practice and outlines the research approaches that underpin EBP. The chapter concludes with a consideration of the specific application of EBP within the dynamic and evolving field of information literacy (IL).
Abstract:
The enhanced accessibility, affordability and capability of the Internet have created enormous possibilities in terms of designing, developing and implementing innovative teaching methods in the classroom. As existing pedagogies are revamped and new ones are added, there is a need to assess the effectiveness of these approaches from the students’ perspective. For more than three decades, proven qualitative and quantitative research methods associated with learning environments research have yielded productive results for educators. This article presents the findings of a study in which Getsmart, a teacher-designed website, was blended into science and physics lessons at an Australian high school. Students’ perceptions of this environment were investigated, together with differences in the perceptions of students in junior and senior years of schooling. The article also explores the impact of teachers in such an environment. The investigation also gave an indication of how effective Getsmart was as a teaching model in such environments.
Abstract:
A surface plasmon resonance-based solution affinity assay is described for measuring the Kd of binding of heparin/heparan sulfate-binding proteins with a variety of ligands. The assay involves the passage of a pre-equilibrated solution of protein and ligand over a sensor chip onto which heparin has been immobilised. Heparin sensor chips prepared by four different methods, including biotin–streptavidin affinity capture and direct covalent attachment to the chip surface, were successfully used in the assay and gave similar Kd values. The assay is applicable to a wide variety of heparin/HS-binding proteins of diverse structure and function (e.g., FGF-1, FGF-2, VEGF, IL-8, MCP-2, ATIII, PF4) and to ligands of varying molecular weight and degree of sulfation (e.g., heparin, PI-88, sucrose octasulfate, naphthalene trisulfonate) and is thus well suited for the rapid screening of ligands in drug discovery applications.
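To make the idea of the solution affinity measurement concrete, the Python sketch below fits a Kd from synthetic solution-competition data under a simple 1:1 binding model, assuming the sensor response is proportional to the free protein remaining after pre-equilibration with the ligand. The binding model, concentrations and noise level are illustrative assumptions, not the paper's actual analysis or data.

```python
# Hedged sketch (not the paper's analysis): fitting a Kd from a solution-competition
# SPR experiment, assuming the response tracks the free (unbound) protein left after
# pre-equilibration with the ligand.  A 1:1 binding model and synthetic data are assumed.
import numpy as np
from scipy.optimize import curve_fit

def free_protein(L0, Kd, P0=10.0):
    """Free protein (nM) remaining at 1:1 equilibrium with total ligand L0 (nM)."""
    b = P0 + L0 + Kd
    PL = 0.5 * (b - np.sqrt(b * b - 4.0 * P0 * L0))   # bound complex from the quadratic
    return P0 - PL

def response(L0, Kd, Rmax):
    return Rmax * free_protein(L0, Kd) / 10.0          # response scales with free protein

# Synthetic titration: total ligand varied, response "measured" with noise.
L0 = np.array([0.0, 2.0, 5.0, 10.0, 20.0, 50.0, 100.0, 200.0])
rng = np.random.default_rng(2)
data = response(L0, Kd=15.0, Rmax=100.0) + rng.normal(0.0, 1.0, L0.size)

popt, pcov = curve_fit(response, L0, data, p0=[10.0, 80.0])
print(f"fitted Kd = {popt[0]:.1f} nM, Rmax = {popt[1]:.1f} RU")
```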
Abstract:
To navigate successfully in a previously unexplored environment, a mobile robot must be able to estimate the spatial relationships of the objects of interest accurately. A Simultaneous Localization and Mapping (SLAM) system employs its sensors to build incrementally a map of its surroundings and to localize itself in the map simultaneously. The aim of this research project is to develop a SLAM system suitable for self-propelled household lawnmowers. The proposed bearing-only SLAM system requires only an omnidirectional camera and some inexpensive landmarks. The main advantage of an omnidirectional camera is the panoramic view of all the landmarks in the scene. Placing landmarks in a lawn field to define the working domain is much easier and more flexible than installing the perimeter wire required by existing autonomous lawnmowers. The common approach of existing bearing-only SLAM methods relies on a motion model for predicting the robot’s pose and a sensor model for updating the pose. In the motion model, the error in the estimates of object positions accumulates mainly because of wheel slippage, so accurately quantifying the uncertainty of object positions is a fundamental requirement. In bearing-only SLAM, the Probability Density Function (PDF) of landmark position should be uniform along the observed bearing. Existing methods that approximate the PDF with a Gaussian estimation do not satisfy this uniformity requirement. This thesis introduces both geometric and probabilistic methods to address the above problems. The main novel contributions of this thesis are: 1. A bearing-only SLAM method not requiring odometry. The proposed method relies solely on the sensor model (landmark bearings only) without relying on the motion model (odometry). The uncertainty of the estimated landmark positions depends on the vision error only, instead of the combination of both odometry and vision errors. 2. The transformation of the spatial uncertainty of objects. This thesis introduces a novel method for translating the spatial uncertainty of objects estimated from a moving frame attached to the robot into the global frame attached to the static landmarks in the environment. 3. The characterization of an improved PDF for representing landmark position in bearing-only SLAM. The proposed PDF is expressed in polar coordinates, and the marginal probability on range is constrained to be uniform. Compared to the PDF estimated from a mixture of Gaussians, the PDF developed here has far fewer parameters and can be easily adopted in a probabilistic framework, such as a particle filtering system. The main advantages of our proposed bearing-only SLAM system are its lower production cost and flexibility of use. The proposed system can be adopted in other domestic robots as well, such as vacuum cleaners or robotic toys, when the terrain is essentially 2D.
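As an illustration of the polar-coordinate landmark PDF described above, the short Python sketch below draws landmark-position hypotheses whose marginal over range is uniform along the observed bearing, with Gaussian bearing noise standing in for the vision error. The range bounds and noise level are assumed values, and this is a simplified sketch rather than the thesis implementation.

```python
# Illustrative sketch (not the thesis code): sampling landmark-position hypotheses
# for bearing-only SLAM, keeping the marginal over range uniform along the observed
# bearing.  Range bounds and bearing noise are assumed for the example.
import numpy as np

def sample_landmark_hypotheses(robot_xy, robot_heading, bearing_obs,
                               r_min=0.5, r_max=10.0, bearing_sigma=0.02,
                               n_samples=1000, rng=None):
    """Draw landmark hypotheses in the global frame.

    robot_xy      : (x, y) of the robot in the global frame
    robot_heading : robot heading (rad) in the global frame
    bearing_obs   : observed bearing to the landmark (rad, robot frame)
    """
    rng = np.random.default_rng() if rng is None else rng
    # Uniform marginal on range: the uniformity requirement from the abstract.
    ranges = rng.uniform(r_min, r_max, n_samples)
    # Gaussian bearing noise models the vision error only (no odometry term).
    bearings = bearing_obs + rng.normal(0.0, bearing_sigma, n_samples)
    # Polar to Cartesian in the robot frame, then rotate/translate to the global frame.
    theta = robot_heading + bearings
    x = robot_xy[0] + ranges * np.cos(theta)
    y = robot_xy[1] + ranges * np.sin(theta)
    return np.column_stack([x, y])

# Example: hypotheses for a landmark seen at 30 degrees to the robot's left.
hyps = sample_landmark_hypotheses((0.0, 0.0), 0.0, np.deg2rad(30.0))
print(hyps.mean(axis=0), hyps.shape)
```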
Abstract:
Purpose: Worldwide, the incidence of thick melanoma has not declined, and the nodular melanoma (NM) subtype accounts for nearly 40% of newly diagnosed thick melanoma. To assess differences between patients with thin (≤2.00 mm) and thick (≥2.01 mm) nodular melanoma, we evaluated factors such as demographics, melanoma detection patterns, tumor visibility, and physician screening for NM alone, and compared the clinical presentation and anatomic location of NM with superficial spreading melanoma (SSM). Methods: We utilized data from a large population-based study of Queensland (Australia) residents diagnosed with melanoma. Queensland residents aged 20 to 75 years with histologically confirmed first primary invasive cutaneous melanoma were eligible for the study, and all questionnaires were conducted by telephone (response rate 77.9%). Results: During this four-year period, 369 patients with nodular melanoma were interviewed, of whom 56.7% were diagnosed with tumors ≤2.00 mm. Men, older individuals, and those who had not been screened by a physician in the past three years were more likely to have nodular tumors of greater thickness. The thickest nodular melanomas (4 mm+) were also most common in persons who had not been screened by a doctor within the past three years (OR 3.75; 95% CI 1.47-9.59). Forty-six percent of patients with thin nodular melanoma (≤2.00 mm) reported a change in color, compared with 64% of patients with thin SSM and 26% of patients with thick nodular melanoma (>2.00 mm). Conclusion: Awareness of factors related to earlier detection of potentially fatal nodular melanomas, including the benefits of a physician examination, should be useful in enhancing public and professional education strategies. Particular awareness of clinical warning signs associated with thin nodular melanoma should allow for more prompt diagnosis and treatment of this subtype.
Abstract:
This article explores two matrix methods to induce the "shades of meaning" (SoM) of a word. A matrix representation of a word is computed from a corpus of traces based on the given word. Non-negative Matrix Factorisation (NMF) and Singular Value Decomposition (SVD) are each used to compute a set of vectors, with each vector corresponding to a potential shade of meaning. The two methods were evaluated based on loss of conditional entropy with respect to two sets of manually tagged data. One set reflects concepts generally appearing in text, and the second set comprises words used for investigations into word sense disambiguation. Results show that NMF consistently outperforms SVD for inducing both the SoM of general concepts and word senses. The problem of inducing the shades of meaning of a word is more subtle than word sense induction, and is hence relevant to thematic analysis of opinion, where nuances of opinion can arise.
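The following Python sketch illustrates the general matrix-factorisation idea: a small matrix built from the context "traces" of a target word is factorised with NMF and truncated SVD, and the top-weighted terms of each component are read as a candidate shade of meaning. The toy corpus and the use of scikit-learn are assumptions made for the example, not the authors' pipeline.

```python
# Illustrative sketch only (not the authors' code): inducing candidate "shades of
# meaning" for one target word by factorising a contexts-by-terms count matrix.
import numpy as np
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.decomposition import NMF, TruncatedSVD

# Pretend "traces": short contexts in which the target word "bank" occurs (assumed data).
contexts = [
    "river bank water fishing",
    "bank loan interest money",
    "steep bank river erosion",
    "bank account money deposit",
]

vec = CountVectorizer()
X = vec.fit_transform(contexts).astype(float)   # contexts x terms
terms = np.array(vec.get_feature_names_out())

k = 2  # number of candidate shades of meaning (assumed)
nmf = NMF(n_components=k, init="nndsvd", random_state=0)
W_nmf = nmf.fit_transform(X)                    # context loadings
H_nmf = nmf.components_                         # term loadings per candidate shade

svd = TruncatedSVD(n_components=k, random_state=0)
svd.fit(X)
H_svd = svd.components_

def top_terms(H, n=3):
    """Highest-weighted terms of each component, read as a candidate shade of meaning."""
    return [terms[np.argsort(row)[::-1][:n]].tolist() for row in H]

print("NMF shades:", top_terms(H_nmf))
print("SVD shades:", top_terms(H_svd))
```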
Abstract:
Matrix function approximation is a current focus of worldwide interest and finds application in a variety of areas of applied mathematics and statistics. In this thesis we focus on the approximation of A^(-α/2)b, where A ∈ ℝ^(n×n) is a large, sparse symmetric positive definite matrix and b ∈ ℝ^n is a vector. In particular, we focus on matrix function techniques for sampling from Gaussian Markov random fields in applied statistics and for the solution of fractional-in-space partial differential equations. Gaussian Markov random fields (GMRFs) are multivariate normal random variables characterised by a sparse precision (inverse covariance) matrix. GMRFs are popular models in computational spatial statistics as the sparse structure can be exploited, typically through the use of the sparse Cholesky decomposition, to construct fast sampling methods. It is well known, however, that for sufficiently large problems, iterative methods for solving linear systems outperform direct methods. Fractional-in-space partial differential equations arise in models of processes undergoing anomalous diffusion. Unfortunately, as the fractional Laplacian is a non-local operator, numerical methods based on the direct discretisation of these equations typically require the solution of dense linear systems, which is impractical for fine discretisations. In this thesis, novel applications of Krylov subspace approximations to matrix functions for both of these problems are investigated. Matrix functions arise when sampling from a GMRF by noting that the Cholesky decomposition A = LL^T is, essentially, a 'square root' of the precision matrix A. Therefore, we can replace the usual sampling method, which forms x = L^(-T)z, with x = A^(-1/2)z, where z is a vector of independent and identically distributed standard normal random variables. Similarly, the matrix transfer technique can be used to build solutions to the fractional Poisson equation of the form ϕ_n = A^(-α/2)b, where A is the finite difference approximation to the Laplacian. Hence both applications require the approximation of f(A)b, where f(t) = t^(-α/2) and A is sparse. In this thesis we compare the Lanczos approximation, the shift-and-invert Lanczos approximation, the extended Krylov subspace method, rational approximations and the restarted Lanczos approximation for approximating matrix functions of this form. A number of novel results are presented in this thesis. Firstly, we prove the convergence of the matrix transfer technique for the solution of the fractional Poisson equation and give conditions under which the finite difference discretisation can be replaced by other methods for discretising the Laplacian. We then investigate a number of methods for approximating matrix functions of the form A^(-α/2)b and investigate stopping criteria for these methods. In particular, we derive a new method for restarting the Lanczos approximation to f(A)b. We then apply these techniques to the problem of sampling from a GMRF and construct a full suite of methods for sampling conditioned on linear constraints and for approximating the likelihood. Finally, we consider the problem of sampling from a generalised Matérn random field, which combines our techniques for solving fractional-in-space partial differential equations with our method for sampling from GMRFs.
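A minimal sketch of the basic (unrestarted) Lanczos approximation to f(A)b with f(t) = t^(-α/2) is given below in Python; the test matrix (a one-dimensional finite-difference Laplacian) and the subspace size are assumptions for illustration, and the shift-and-invert, extended Krylov, rational and restarted variants discussed in the thesis are not shown.

```python
# Minimal sketch of the Lanczos approximation to f(A)b with f(t) = t^(-alpha/2),
# for a sparse symmetric positive definite A.  Illustration of the general
# technique only; the test matrix and subspace size are assumed.
import numpy as np
import scipy.sparse as sp

def lanczos_matfunc(A, b, m, f):
    """Approximate f(A) b using an m-step Lanczos process."""
    n = b.shape[0]
    V = np.zeros((n, m))
    alpha = np.zeros(m)
    beta = np.zeros(m)
    beta0 = np.linalg.norm(b)
    V[:, 0] = b / beta0
    for j in range(m):
        w = A @ V[:, j]
        if j > 0:
            w -= beta[j - 1] * V[:, j - 1]
        alpha[j] = V[:, j] @ w
        w -= alpha[j] * V[:, j]
        if j < m - 1:
            beta[j] = np.linalg.norm(w)
            if beta[j] == 0:            # invariant subspace found: stop early
                m = j + 1
                break
            V[:, j + 1] = w / beta[j]
    T = np.diag(alpha[:m]) + np.diag(beta[:m - 1], 1) + np.diag(beta[:m - 1], -1)
    evals, Q = np.linalg.eigh(T)         # T is symmetric tridiagonal
    fT_e1 = Q @ (f(evals) * Q[0, :])     # f(T) e_1 via the eigendecomposition of T
    return beta0 * (V[:, :m] @ fT_e1)    # ||b|| V_m f(T_m) e_1

# Example: A is the standard 1-D finite-difference Laplacian (SPD), f(t) = t^(-a/2).
n, a = 200, 1.5
A = sp.diags([-1, 2, -1], [-1, 0, 1], shape=(n, n), format="csr")
b = np.ones(n)
x_m = lanczos_matfunc(A, b, m=40, f=lambda t: t ** (-a / 2))
print(x_m[:5])
```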
Abstract:
One of the new challenges in aeronautics is combining and accounting for multiple disciplines while considering uncertainties or variability in the design parameters or operating conditions. This paper describes a methodology for robust multidisciplinary design optimisation when there is uncertainty in the operating conditions. The methodology, which is based on canonical evolutionary algorithms, is enhanced by coupling it with an uncertainty analysis technique. The paper illustrates the use of this methodology on two practical test cases related to Unmanned Aerial Systems (UAS), which are ideal candidates due to the multi-physics involved and the variability of the missions to be performed. Results obtained from the optimisation show that the method is effective in finding useful Pareto non-dominated solutions and demonstrate the value of robust design techniques.
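The sketch below illustrates, in Python, the general pattern of robust optimisation with an evolutionary algorithm: each candidate design is scored by its objective averaged over sampled operating conditions plus a penalty on its variability. The toy objective, the uncertainty model for the operating condition and the evolution-strategy settings are assumptions for illustration and are unrelated to the paper's UAS test cases.

```python
# Hedged sketch of robust design optimisation with an evolutionary algorithm:
# fitness = mean objective over sampled operating conditions + variability penalty.
# The toy objective, uncertainty model and EA settings are assumed.
import numpy as np

rng = np.random.default_rng(0)

def objective(design, mach):
    # Toy surrogate: "drag" depends on two design variables and the flight condition.
    return (design[0] - 0.5 * mach) ** 2 + (design[1] - 1.0) ** 2 + 0.1 * mach

def robust_fitness(design, n_mc=50, k=1.0):
    # Uncertain operating condition: cruise Mach number sampled around a nominal value.
    mach = rng.normal(0.6, 0.05, n_mc)
    vals = np.array([objective(design, m) for m in mach])
    return vals.mean() + k * vals.std()   # penalise sensitivity to the uncertainty

# Minimal (mu + lambda) evolution strategy over a 2-variable design vector.
mu, lam, sigma = 10, 40, 0.1
pop = rng.uniform(0.0, 2.0, size=(mu, 2))
for gen in range(50):
    parents = pop[rng.integers(0, mu, lam)]
    children = parents + rng.normal(0.0, sigma, parents.shape)
    both = np.vstack([pop, children])
    fit = np.array([robust_fitness(d) for d in both])
    pop = both[np.argsort(fit)[:mu]]      # keep the mu most robust designs

print("best design:", pop[0], "robust fitness:", robust_fitness(pop[0]))
```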
Abstract:
Mobile robots are widely used in many industrial fields. Research on path planning for mobile robots is one of the most important aspects of mobile robot research. Path planning for a mobile robot involves finding a collision-free route through the robot’s environment, which contains obstacles, from a specified start location to a desired goal destination while satisfying certain optimization criteria. Most existing path planning methods, such as the visibility graph, cell decomposition, and potential field methods, are designed with a focus on static environments, in which there are only stationary obstacles. However, in practical systems such as marine science research, robots in the mining industry, and RoboCup games, robots usually face dynamic environments, in which both moving and stationary obstacles exist. Because of the complexity of dynamic environments, research on path planning in environments with dynamic obstacles is limited; only a small number of papers have been published in this area, in comparison with hundreds of reports on path planning in stationary environments in the open literature. Recently, a genetic algorithm based approach has been introduced to plan the optimal path for a mobile robot in a dynamic environment with moving obstacles. However, as the number of obstacles in the environment increases, and as the moving speeds and directions of the robot and obstacles change, the size of the problem to be solved increases sharply and the performance of the genetic algorithm based approach deteriorates significantly. This motivates the present research. This research develops and implements a simulated annealing algorithm based approach to find the optimal path for a mobile robot in a dynamic environment with moving obstacles. The simulated annealing algorithm is an optimization algorithm similar in principle to the genetic algorithm. However, our investigation and simulations have indicated that the simulated annealing algorithm based approach is simpler and easier to implement. Its performance is also shown to be superior to that of the genetic algorithm based approach in both online and offline processing times as well as in obtaining the optimal solution for path planning of the robot in the dynamic environment. The first step of many path planning methods is to search for an initial feasible path for the robot. A commonly used method for searching the initial path is to randomly pick some vertices of the obstacles in the search space. This is time-consuming in both static and dynamic path planning, and has an important impact on the efficiency of dynamic path planning. This research proposes a heuristic method to search for the feasible initial path efficiently. The heuristic method is then incorporated into the proposed simulated annealing algorithm based approach for dynamic robot path planning. Simulation experiments have shown that, with the incorporation of the heuristic method, the developed simulated annealing algorithm based approach requires much shorter processing times to obtain the optimal solutions to the dynamic path planning problem. Furthermore, the quality of the solution, as characterized by the length of the planned path, is also improved with the incorporated heuristic method for both online and offline path planning.
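A minimal Python sketch of simulated-annealing path planning over intermediate waypoints is given below; the static circular obstacles, cooling schedule and penalty weights are assumptions for illustration, whereas the thesis additionally handles moving obstacles and a heuristic initial path.

```python
# Minimal sketch of simulated-annealing path planning through intermediate waypoints.
# Obstacle model (static circles), cooling schedule and penalty weights are assumed.
import numpy as np

rng = np.random.default_rng(1)
start, goal = np.array([0.0, 0.0]), np.array([10.0, 10.0])
obstacles = [(np.array([4.0, 5.0]), 1.5), (np.array([7.0, 3.0]), 1.0)]  # (centre, radius)

def cost(waypoints):
    pts = np.vstack([start, waypoints, goal])
    length = np.sum(np.linalg.norm(np.diff(pts, axis=0), axis=1))
    penalty = 0.0
    for c, r in obstacles:                      # penalise points inside obstacles
        d = np.linalg.norm(pts - c, axis=1)
        penalty += np.sum(np.maximum(0.0, r - d))
    return length + 50.0 * penalty

# Straight-line initial guess (a heuristic initial path could be used instead).
path = np.linspace(start, goal, 7)[1:-1]
best, best_cost, T = path.copy(), cost(path), 5.0
for it in range(5000):
    cand = path + rng.normal(0.0, 0.3, path.shape)   # perturb the waypoints
    dc = cost(cand) - cost(path)
    if dc < 0 or rng.random() < np.exp(-dc / T):     # Metropolis acceptance rule
        path = cand
        if cost(path) < best_cost:
            best, best_cost = path.copy(), cost(path)
    T *= 0.999                                       # geometric cooling
print("best path cost:", best_cost)
```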
Abstract:
Cultural objects are increasingly generated and stored in digital form, yet effective methods for their indexing and retrieval remain an important area of research. The main problem arises from the disconnection between the content-based indexing approach used by computer scientists and the description-based approach used by information scientists. There is also a lack of representational schemes that allow the alignment of the semantics and context with keywords and low-level features that can be automatically extracted from the content of these cultural objects. This paper presents an integrated approach to address these problems, taking advantage of both computer science and information science approaches. We first discuss the requirements from a number of perspectives: users, content providers, content managers and technical systems. We then present an overview of our system architecture and describe the techniques that underlie the major components of the system, including automatic object category detection; user-driven tagging; metadata transformation and augmentation; and an expression language for digital cultural objects. In addition, we discuss our experience in testing and evaluating some existing collections, analyse the difficulties encountered and propose ways to address these problems.
Abstract:
Migraine is a painful disorder for which the etiology remains obscure. Diagnosis is largely based on International Headache Society criteria. However, no feature occurs in all patients who meet these criteria, and no single symptom is required for diagnosis. Consequently, this definition may not accurately reflect the phenotypic heterogeneity or genetic basis of the disorder. Such phenotypic uncertainty is typical for complex genetic disorders and has encouraged interest in multivariate statistical methods for classifying disease phenotypes. We applied three popular statistical phenotyping methods, namely latent class analysis, grade of membership, and "fuzzy" clustering (Fanny), to migraine symptom data, and compared the heritability and genome-wide linkage results obtained using each approach. Our results demonstrate that different methodologies produce different clustering structures and non-negligible differences in subsequent analyses. We therefore urge caution in the use of any single approach and suggest that multiple phenotyping methods be used.
Abstract:
This study, in its exploration of the attached play scripts and their method of development, evaluates the forms, strategies, and methods of an organised model of formalised playwriting. Through examination of, reflection on, and reaction to a perceived crisis in playwriting in the Australian theatre sector, the notion of Industrial Playwriting is arrived at: a practice whereby plays are designed and constructed, and where the process of writing becomes central to the efficient creation of new work and the improvement of the writer’s skill and knowledge base. Using a practice-led methodology and action research, the study examines a system of play construction appropriate to, and addressing the challenges of, the contemporary Australian theatre sector. Specifically, using the action research methodology known as design-based research, a conceptual framework was constructed to form the basis of the notion of Industrial Playwriting. From this, two plays were constructed using a case study method, and the process was recorded and used to create a practical, step-by-step system of Industrial Playwriting. In the creative practice of manufacturing a single-authored play, and then a group-devised play, Industrial Playwriting was tested and found also to offer a valid alternative approach to playwriting in the training of new and even emerging playwrights. Finally, it offered insight into how Industrial Playwriting could be used to greatly facilitate theatre companies’ ongoing need for access to new writers and new Australian works, and how it might form the basis of a cost-effective writer development model. This study of the methods of formalised writing as a means to confront some of the challenges of the Australian theatre sector, the practice of playwriting and the history associated with it, makes an original and important contribution to contemporary playwriting practice.
Abstract:
This paper presents a high voltage pulsed power system based on low voltage switch-capacitor units connected to a current source, for applications such as plasma systems. A buck-boost converter topology is used to utilize the current source, and a series of low voltage switch-capacitor units is connected to the current source in order to provide a high output voltage with the high voltage stress (dv/dt) demanded by such loads. The pulsed power converter is flexible in terms of energy control, in that the stored energy in the current source can be adjusted by changing the current magnitude, which can significantly improve the efficiency of systems with different requirements. The output voltage magnitude and stress (dv/dt) can be controlled through proper selection of components and of the control algorithm used to turn the switching devices on and off.
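As a simplified, back-of-the-envelope illustration of how the current magnitude, capacitance and number of units set the output voltage and its dv/dt, the small Python calculation below assumes a constant current source driving equal-valued capacitors; all component values are assumed and the switching details of the actual converter are not modelled.

```python
# Back-of-the-envelope illustration (assumed values, not from the paper):
# a capacitor charged by a constant current I ramps at dv/dt = I / C, and when the
# same source current flows through N equal series-connected units their voltages
# add, so magnitude and slew rate follow from I, C and N.
I = 10.0       # source current (A), assumed
C = 100e-9     # capacitance per unit (F), assumed
N = 8          # number of series-connected units, assumed
V_unit = 1e3   # target voltage per unit (V), assumed

dv_dt_unit = I / C                 # voltage slew rate of one unit (V/s)
t_charge = C * V_unit / I          # time for one unit to reach V_unit (s)
V_out = N * V_unit                 # stacked output magnitude (V)
dv_dt_out = N * dv_dt_unit         # slew rate of the series stack when one current feeds all units

print(f"per-unit dv/dt = {dv_dt_unit:.3g} V/s, charge time = {t_charge*1e6:.2f} us")
print(f"output = {V_out/1e3:.1f} kV, output dv/dt = {dv_dt_out:.3g} V/s")
```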
Abstract:
A new steady state method for determination of the electron diffusion length in dye-sensitized solar cells (DSCs) is described and illustrated with data obtained using cells containing three different types of electrolyte. The method is based on using near-IR absorbance methods to establish pairs of illumination intensities for which the total number of trapped electrons is the same at open circuit (where all electrons are lost by interfacial electron transfer) as at short circuit (where the majority of electrons are collected at the contact). Electron diffusion length values obtained by this method are compared with values derived by intensity-modulated methods and by impedance measurements under illumination. The results indicate that the values of electron diffusion length derived from the steady state measurements are consistently lower than the values obtained by the non-steady-state methods. For all three electrolytes used in the study, the electron diffusion length was sufficiently high to guarantee electron collection efficiencies greater than 90%. Measurement of the trap distributions by near-IR absorption confirmed earlier observations of much higher electron trap densities for electrolytes containing Li+ ions. It is suggested that the electron trap distributions may not be intrinsic properties of the TiO2 nanoparticles, but may be associated with electron-ion interactions.
Abstract:
Appearance-based mapping and localisation is especially challenging when the separate processes of mapping and localisation occur at different times of day. The problem is exacerbated outdoors, where continuous change in sun angle can drastically affect the appearance of a scene. We confront this challenge by fusing the probabilistic, local-feature-based data association method of FAB-MAP with the pose cell filtering and experience mapping of RatSLAM. We evaluate the effectiveness of our amalgamation of methods using five datasets captured throughout the day from a single camera driven through a network of suburban streets. We show further results when the streets are re-visited three weeks later, and draw conclusions on the value of the system for lifelong mapping.