981 results for domain experts


Relevance: 30.00%

Abstract:

Optical Coherence Tomography (OCT) is a popular, rapidly growing imaging technique with an increasing number of biomedical applications due to its noninvasive nature. However, there are three major challenges in understanding and improving an OCT system: (1) Obtaining an OCT image is not easy. It either takes a real medical experiment or requires days of computer simulation. Without much data, it is difficult to study the physical processes underlying OCT imaging of different objects, simply because there are not many imaged objects. (2) Interpretation of an OCT image is also hard. This challenge is more profound than it appears. For instance, it would require a trained expert to tell from an OCT image of human skin whether there is a lesion or not. This is expensive in its own right, but even the expert cannot be sure about the exact size of the lesion or the width of the various skin layers. The take-away message is that analyzing an OCT image even at a high level usually requires a trained expert, and pixel-level interpretation is simply unrealistic. The reason is simple: we have OCT images but not their underlying ground-truth structure, so there is nothing to learn from. (3) The imaging depth of OCT is very limited (millimeter or sub-millimeter in human tissue). While OCT uses infrared light for illumination to stay noninvasive, the downside is that photons at such long wavelengths can only penetrate a limited depth into the tissue before being back-scattered. To image a particular region of a tissue, photons first need to reach that region. As a result, OCT signals from deeper regions of the tissue are both weak (since few photons reach there) and distorted (due to multiple scattering of the contributing photons). This fact alone makes OCT images very hard to interpret.

This thesis addresses the above challenges by developing an advanced Monte Carlo simulation platform that is 10,000 times faster than the state-of-the-art simulator in the literature, bringing the simulation time down from 360 hours to a single minute. This powerful simulation tool not only enables us to efficiently generate as many OCT images of objects with arbitrary structure and shape as we want on a common desktop computer, but also provides the underlying ground truth of the simulated images at the same time, because we specify it at the start of the simulation. This is one of the key contributions of this thesis. What allows us to build such a powerful simulation tool includes a thorough understanding of the signal formation process, a careful implementation of the importance sampling/photon splitting procedure, efficient use of a voxel-based mesh system for determining photon-mesh intersections, and parallel computation of the different A-scans that constitute a full OCT image, among other programming and mathematical techniques explained in detail later in the thesis.
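
To make the photon splitting idea mentioned above concrete, here is a deliberately minimal sketch of Monte Carlo photon-packet transport with particle splitting as a variance-reduction device. The 1-D geometry, the parameter values, and all names below are illustrative assumptions, not the simulator developed in the thesis (which combines splitting with importance sampling and a voxel mesh).

```python
import numpy as np

# Illustrative sketch only: photon packets random-walk through a 1-D slab;
# packets that reach a chosen depth are split into several lower-weight
# daughters so that deep regions are sampled more densely.

MU_S, MU_A = 10.0, 0.1            # assumed scattering / absorption coefficients (1/mm)
MU_T = MU_S + MU_A
DEPTH = 1.0                        # slab thickness (mm), assumed
SPLIT_Z = 0.5                      # split packets that reach this depth
N_SPLIT = 4                        # daughters per split

def propagate(z, direction, weight, rng, detected, split_done=False):
    while weight > 1e-4:
        z += direction * (-np.log(rng.random()) / MU_T)   # sample free path length
        if z < 0.0:                                        # back out of the tissue
            detected.append(weight)                        # contributes to the detected signal
            return
        if z > DEPTH:                                      # transmitted, lost
            return
        weight *= MU_S / MU_T                              # absorption handled as weight loss
        if not split_done and z > SPLIT_Z:
            # Split into N_SPLIT daughters, each carrying weight / N_SPLIT and
            # continuing independently; the estimator's expectation is unchanged,
            # but the deep region is visited by many more packets.
            for _ in range(N_SPLIT):
                propagate(z, direction, weight / N_SPLIT, rng, detected, True)
            return
        direction = 1.0 if rng.random() < 0.5 else -1.0    # isotropic rescatter (1-D)

rng = np.random.default_rng(0)
detected = []
for _ in range(20_000):
    propagate(0.0, 1.0, 1.0, rng, detected)
print("detected packets:", len(detected), "mean weight:", float(np.mean(detected)))
```

The point of the splitting step is that deep regions, which few packets reach, are sampled far more densely without biasing the expected value of the detected signal.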

Next we turn to the inverse problem: given an OCT image, predict/reconstruct its ground-truth structure at the pixel level. Solving this problem would allow us to interpret an OCT image completely and precisely without the help of a trained expert. It turns out that we can do much better than expected: for simple structures we are able to reconstruct the ground truth of an OCT image with more than 98% accuracy, and for more complicated structures (e.g., a multi-layered brain structure) with about 93%. We achieve this through extensive use of machine learning. The success of the Monte Carlo simulation already puts us in a strong position by providing a great deal of data (effectively unlimited) in the form of (image, truth) pairs. Through a transformation of the high-dimensional response variable, we convert the learning task into a multi-output multi-class classification problem and a multi-output regression problem. We then build a hierarchical architecture of machine learning models (a committee of experts) and train different parts of the architecture with specifically designed data sets. At prediction time, an unseen OCT image first goes through a classification model that determines its structure (e.g., the number and types of layers present in the image); the image is then handed to a regression model trained specifically for that structure, which predicts the thickness of the different layers and thereby reconstructs the ground truth of the image. We also demonstrate that ideas from deep learning can further improve the performance.
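
The "committee of experts" prediction scheme described above can be illustrated with a small classify-then-regress sketch. The use of scikit-learn, the random-forest models, the synthetic A-scan generator, and all parameter values are assumptions chosen for illustration; they are not the thesis pipeline.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier, RandomForestRegressor

rng = np.random.default_rng(0)

# Toy data: each "A-scan" is a 64-sample depth profile; the structure label is
# the number of layers it contains, and the regression target is the layer thicknesses.
def make_example(n_layers):
    thicknesses = rng.uniform(5, 20, size=n_layers)
    profile = np.repeat(rng.uniform(0.2, 1.0, size=n_layers),
                        np.round(thicknesses).astype(int))[:64]
    profile = np.pad(profile, (0, 64 - len(profile)))
    return profile + 0.05 * rng.normal(size=64), n_layers, np.pad(thicknesses, (0, 4 - n_layers))

X, y_struct, y_thick = zip(*[make_example(rng.integers(2, 5)) for _ in range(2000)])
X, y_struct, y_thick = np.array(X), np.array(y_struct), np.array(y_thick)

# Stage 1: classifier that predicts the structure (here, the number of layers).
clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y_struct)

# Stage 2: one regressor per structure, trained only on scans of that structure.
regressors = {k: RandomForestRegressor(n_estimators=100, random_state=0)
                 .fit(X[y_struct == k], y_thick[y_struct == k])
              for k in np.unique(y_struct)}

def reconstruct(scan):
    structure = clf.predict(scan[None, :])[0]          # pick which expert to consult
    return structure, regressors[structure].predict(scan[None, :])[0]

print(reconstruct(X[0]))
```

The design point this mirrors is that each regressor only ever sees data from one structure, so it can specialize, while the classifier routes unseen images to the right specialist.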

It is worth pointing out that solving the inverse problem automatically improves the imaging depth, since the lower half of an OCT image (i.e., greater depth), which previously could hardly be seen, now becomes fully resolved. Interestingly, although the OCT signals constituting the lower half of the image are weak, messy, and uninterpretable to human eyes, they still carry enough information that a well-trained machine learning model can recover precisely the true structure of the object being imaged. This is just another case where Artificial Intelligence (AI) outperforms humans. To the best of the author's knowledge, this thesis is not only a successful attempt but also the first attempt to reconstruct an OCT image at the pixel level. Even attempting such a task would require a large number of fully annotated OCT images (hundreds or even thousands), which is clearly impossible without a powerful simulation tool like the one developed in this thesis.

Relevance: 30.00%

Abstract:

This report describes a domain-independent reasoning system. The system uses a frame-based knowledge representation language and various reasoning techniques, including constraint propagation, progressive refinement, natural deduction, and explicit control of reasoning. A computational architecture based on active objects that operate by exchanging messages is developed, and it is shown how this architecture supports reasoning activity. The user interacts with the system by specifying frames and by giving descriptions defining the problem situation. The system uses its reasoning capacity to build up a model of the problem situation from which a solution can be interactively extracted. Examples are discussed from a variety of domains, including electronic circuits, mechanical devices, and music. The main thesis is that a reasoning system is best viewed as a parallel system whose control and data are distributed over a large network of processors that interact by exchanging messages. Such a system is metaphorically described as a society of communicating experts.
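
As an illustration of the "active objects that operate by exchanging messages" architecture, the following sketch implements a tiny round-robin scheduler over message-passing objects, with one toy expert that performs a simple constraint-style deduction. The class names, the scheduler, and the example expert are assumptions for illustration and do not reproduce the report's system.

```python
from collections import deque

class ActiveObject:
    """An object with a private inbox that reacts to messages when stepped."""
    def __init__(self, name, network):
        self.name, self.network, self.inbox = name, network, deque()
        network.register(self)
    def send(self, target, message):
        self.network.deliver(target, message)
    def step(self):
        if self.inbox:
            self.receive(self.inbox.popleft())
    def receive(self, message):           # overridden by concrete experts
        pass

class Network:
    """A trivial scheduler standing in for the network of processors."""
    def __init__(self):
        self.objects = {}
    def register(self, obj):
        self.objects[obj.name] = obj
    def deliver(self, target, message):
        self.objects[target].inbox.append(message)
    def run(self, steps=100):
        for _ in range(steps):
            for obj in list(self.objects.values()):
                obj.step()

class AdderExpert(ActiveObject):
    """Tiny constraint expert: once it knows both addends, it announces the sum."""
    def __init__(self, name, network, announce_to):
        super().__init__(name, network)
        self.known, self.announce_to = {}, announce_to
    def receive(self, message):
        slot, value = message
        self.known[slot] = value
        if "a" in self.known and "b" in self.known:
            self.send(self.announce_to, ("sum", self.known["a"] + self.known["b"]))

class Printer(ActiveObject):
    def receive(self, message):
        print(self.name, "received", message)

net = Network()
Printer("user", net)
adder = AdderExpert("adder", net, announce_to="user")
adder.send("adder", ("a", 3))
adder.send("adder", ("b", 4))
net.run()                                 # prints: user received ('sum', 7)
```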

Relevance: 30.00%

Abstract:

An aim of proactive risk management strategies is the timely identification of safety-related risks. One way to achieve this is by deploying early warning systems, which aim to provide timely information on the presence of potential threats to a system, the level of vulnerability of a system, or both. This information can then be used to take proactive safety measures. The United Nations has recommended that any early warning system should have four essential elements: risk knowledge, a monitoring and warning service, dissemination and communication, and a response capability. This research deals with the risk knowledge element of an early warning system, which contains models of possible accident scenarios. These accident scenarios are created using hazard analysis techniques, which can be categorised as traditional or contemporary. Traditional hazard analysis techniques assume that accidents occur due to a sequence of events, whereas contemporary hazard analysis techniques assume that safety is an emergent property of complex systems. The problem is that no software editor is available that lets analysts create models of accident scenarios based on contemporary hazard analysis techniques and, at the same time, generate computer code that represents those models. This research aims to enhance the process of generating computer code from graphical models that associate early warning signs and causal factors with a hazard, based on contemporary hazard analysis techniques. For this purpose, the thesis investigates the use of Domain Specific Modeling (DSM) technologies. The contribution of this thesis is the design and development of a set of three graphical Domain Specific Modeling Languages (DSMLs) that, when combined, provide all of the constructs needed for safety experts and practitioners to conduct hazard and early warning analysis based on a contemporary hazard analysis approach. The languages represent the elements and relations necessary to define accident scenarios and their associated early warning signs. The three DSMLs were incorporated into a prototype software editor that enables safety scientists and practitioners to create and edit hazard and early warning analysis models in a usable manner and, as a result, to generate executable code automatically. This research demonstrates that DSM technologies can be used to develop a set of three DSMLs that allow users to conduct hazard and early warning analysis in a more usable manner. Furthermore, the three DSMLs and their dedicated editor, which are presented in this thesis, may provide a significant enhancement to the process of creating the risk knowledge element of computer-based early warning systems.
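
The following sketch illustrates the general model-to-code idea behind such a tool: a small hazard model associating early warning signs with a hazard, and a trivial generator that emits executable monitoring code from it. The model schema, thresholds, and generated function are hypothetical and are not the DSMLs or the editor developed in the thesis.

```python
from dataclasses import dataclass, field

@dataclass
class EarlyWarningSign:
    name: str
    threshold: float          # warn when the observed value exceeds this

@dataclass
class HazardModel:
    hazard: str
    signs: list = field(default_factory=list)

def generate_monitor_code(model: HazardModel) -> str:
    """Emit Python source for a monitor function derived from the model."""
    lines = [f"def monitor_{model.hazard.lower().replace(' ', '_')}(readings):",
             "    warnings = []"]
    for sign in model.signs:
        lines.append(f"    if readings.get({sign.name!r}, 0) > {sign.threshold}:")
        lines.append(f"        warnings.append({sign.name!r})")
    lines.append("    return warnings")
    return "\n".join(lines)

# Hypothetical example model: one hazard with two associated early warning signs.
model = HazardModel("Tank overpressure",
                    [EarlyWarningSign("pressure_bar", 8.0),
                     EarlyWarningSign("temperature_c", 90.0)])

source = generate_monitor_code(model)
print(source)                        # the generated, executable code
namespace = {}
exec(source, namespace)              # compile and use it
print(namespace["monitor_tank_overpressure"]({"pressure_bar": 9.2}))
```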

Relevance: 30.00%

Abstract:

The suggestion that the general economy of power in our societies is becoming a domain of security was made by Michel Foucault in the late 1970s. This paper takes inspiration from Foucault's work to interpret human rights as technologies of governmentality, which make possible the safe and secure society. I examine, by way of illustration, the site of the European Union and its use of new modes of governance to regulate rights discourse, in particular via the emergence of a new Fundamental Rights Agency. 'Governance' in the EU is constructed in an apolitical way, as a departure from traditional legal and juridical methods of governing. I argue, however, that the features of governance represent technologies of government(ality), a new form both of being governed through rights and of governing rights. The governance feature that this article is most interested in is experts. The article aims to show, first and foremost, how rights operate as technologies of governmentality via a new relation to expertise. Second, it considers the significant implications that this reading of rights has for rights as a regulatory and normalising discourse. Finally, it highlights how the overlap between rights and governance discourses can be problematic because (as the EU model illustrates) governance conceals the power relations of governmentality, allowing, for instance, the unproblematic representation of the EU as an international human rights actor.

Relevance: 30.00%

Abstract:

We present a tree-structured architecture for supervised learning. The statistical model underlying the architecture is a hierarchical mixture model in which both the mixture coefficients and the mixture components are generalized linear models (GLIMs). Learning is treated as a maximum likelihood problem; in particular, we present an Expectation-Maximization (EM) algorithm for adjusting the parameters of the architecture. We also develop an on-line learning algorithm in which the parameters are updated incrementally. Comparative simulation results are presented in the robot dynamics domain.
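
For readers unfamiliar with the architecture, the following is a simplified sketch of a flat, two-expert mixture of linear experts fitted with EM. The paper's architecture is hierarchical (a gating tree) and uses general GLIMs; the synthetic data, fixed noise variance, gradient-based gating update, and ridge term below are simplifying assumptions for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

# Piecewise-linear data: two regimes for the two experts to specialize on.
X = rng.uniform(-3, 3, size=(500, 1))
y = np.where(X[:, 0] < 0, 2 * X[:, 0] + 1, -X[:, 0] + 3) + 0.1 * rng.normal(size=500)
Phi = np.hstack([X, np.ones((500, 1))])      # inputs with a bias column

K, sigma2 = 2, 0.1                           # number of experts, assumed noise variance
W_exp = rng.normal(size=(K, 2))              # expert regression weights
W_gate = np.zeros((K, 2))                    # gating (softmax) weights

for _ in range(50):
    # E-step: posterior responsibility of each expert for each data point.
    gate_logits = Phi @ W_gate.T
    gate = np.exp(gate_logits - gate_logits.max(axis=1, keepdims=True))
    gate /= gate.sum(axis=1, keepdims=True)
    preds = Phi @ W_exp.T                                    # (N, K) expert means
    log_resp = np.log(gate + 1e-12) - (y[:, None] - preds) ** 2 / (2 * sigma2)
    log_resp -= log_resp.max(axis=1, keepdims=True)          # stabilize before exp
    resp = np.exp(log_resp)
    resp /= resp.sum(axis=1, keepdims=True)

    # M-step: weighted least squares per expert; a gradient step for the
    # gating network (the paper uses IRLS here).
    for k in range(K):
        Rw = resp[:, k:k + 1]
        A = Phi.T @ (Rw * Phi) + 1e-6 * np.eye(2)            # small ridge for stability
        b = Phi.T @ (Rw[:, 0] * y)
        W_exp[k] = np.linalg.solve(A, b)
    W_gate += 0.1 * (resp - gate).T @ Phi / len(y)

print("fitted expert weights (slope, intercept):")
print(W_exp)
```

After fitting, each expert's weights approximate one of the two linear regimes, and the gating network learns where in input space each expert should be trusted.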

Relevance: 30.00%

Abstract:

Purpose: This document identifies the challenges and opportunities in applying ontology technology in the Human Resources (HR) domain. Target users: A reference for both the HR and the ontology communities; also to be used as a roadmap for the OOA itself within the HR domain. Background: During the discussion panel at the OOA kick-off workshop, which was attended by more than 50 HR and ontology experts, the need for this roadmap became clear. It was obvious that the current understanding of the problem of semantics in HR is fragmented and that only partial solutions exist. People from the HR and ontology communities speak different languages, have different understandings, and are not aware of existing solutions.

Relevance: 30.00%

Abstract:

Participatory models are replacing the traditional models of experts and expertise that are based on individuals, their credentials, and domain experience. Wikipedia is a well-known and popular online encyclopedia, built, edited, and administered by lay citizens rather than traditional experts. It utilises a Web-based participatory model of experts and expertise to enable knowledge contributions and provide administration. While much has been written about Wikipedia and its merits and pitfalls, important ethical challenges stem from the underlying Wikipedia model. Ethical concerns are likely to be important to Wikipedia users; however, such concerns have not yet been systematically explored. By reviewing and synthesising the existing literature, this paper identifies six key ethical challenges for existing and potential Wikipedia users, stemming from the underlying Web-based participatory model of experts and expertise. Important implications arising from the findings are also discussed.

Relevance: 30.00%

Abstract:

Health literacy, defined as an individual's capacity to process health information in order to make appropriate health decisions, is the focus of increasing attention in medical fields due to growing awareness that suboptimal health literacy is associated with poorer health outcomes. To explore this issue, a number of instruments have been developed and are reported to have high internal consistency and strong correlations with general literacy tests. However, their validity as measures of the target construct is seldom explored using multiple sources of evidence. The current study, involving collaboration between health professionals and language specialists, set out to assess the validity of the Rapid Estimate of Adult Literacy in Medicine (REALM), which describes itself as a “reading recognition” test that measures the ability to pronounce common medical and lay terms. Drawing on a sample of 310 respondents, including both native and non-native speakers of English, investigations were undertaken to probe the REALM's validity as a measure of understanding of the selected terms and to consider associations between scores on this widely used test and those derived from other recognized health literacy tests. Results suggest that the REALM underrepresents the health literacy construct and that the test may also be biased against non-native speakers of English. The study points to an expanded role for language testers, working in collaboration with experts from medical disciplines, in developing and evaluating health literacy tools.

Relevance: 30.00%

Abstract:

The objective behind building domain-specific visual languages (DSVLs) is to provide users with the most appropriate concepts and notations, those that best fit their domain and experience. However, existing DSVL designers do not support integrating environment and user context information (e.g., location, permissions, device) when modeling, editing, or viewing DSVL models. In this paper, we introduce HorusCML, a context-aware DSVL designer that supports DSVL experts in integrating the necessary context details within their DSVLs. The resulting DSVLs can present different facets, layouts, and behaviours according to the context they are used in. We present a case study on developing a context-aware data flow diagram DSVL tool using HorusCML.
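
A minimal sketch of the context-aware rendering idea follows: the same model element is presented with a different facet depending on the device and user role. The context fields and rendering rules below are hypothetical and are not HorusCML's actual API.

```python
from dataclasses import dataclass

@dataclass
class Context:
    device: str        # e.g. "desktop", "tablet" (assumed context dimensions)
    role: str          # e.g. "modeler", "viewer"

@dataclass
class ModelElement:
    name: str
    kind: str          # e.g. "process", "data_store"

def render(element: ModelElement, ctx: Context) -> dict:
    """Pick a facet of the element appropriate to the current context."""
    facet = {"shape": "rectangle", "editable": False, "detail": "full"}
    if element.kind == "process":
        facet["shape"] = "ellipse"
    if ctx.device == "tablet":
        facet["detail"] = "compact"          # smaller screens get a compact layout
    if ctx.role == "modeler":
        facet["editable"] = True             # only modelers may edit
    return facet

elem = ModelElement("Validate order", "process")
print(render(elem, Context(device="desktop", role="modeler")))
print(render(elem, Context(device="tablet", role="viewer")))
```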

Relevance: 30.00%

Abstract:

Osteoarticular allograft transplantation is a popular treatment method for wide surgical resections with large defects, and for this reason hospitals are building bone data banks. Selecting the optimal allograft from a bone bank is crucial to the surgical outcome and patient recovery, yet current approaches are very time-consuming, hindering efficient selection. We present an automatic method based on the registration of femur bones to overcome this limitation. We introduce a new regularization term for the log-domain demons algorithm that replaces the standard Gaussian smoothing with a femur-specific polyaffine model. The polyaffine femur model is constructed from two affine transformations (femoral head and condyles) and one rigid transformation (shaft). Our main contribution in this paper is to show that the demons algorithm can be improved in specific cases with an appropriate model. We are not trying to find the optimal polyaffine model of the femur, but the simplest model with a minimal number of parameters. There is no need to optimize over different numbers of regions, boundaries, or choices of weights, since this fine-tuning is done automatically by a final demons relaxation step with Gaussian smoothing. The newly developed synthesis approach provides a clear, anatomically motivated modeling contribution through the specific three-component transformation model, and shows a performance improvement (in terms of anatomically meaningful correspondences) on 146 CT images of femurs compared with standard multiresolution demons. In addition, this simple model improves the robustness of the demons algorithm while preserving its accuracy. The ground truth consists of manual measurements performed by medical experts.
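
To illustrate the polyaffine idea used here as a regularizer, the following sketch fuses region-wise affine velocity fields with Gaussian spatial weights into a single smooth field. The 2-D toy geometry, the three regions, and the weighting scheme are assumptions for illustration, not the paper's implementation or the full log-demons pipeline.

```python
import numpy as np

# Three anatomical regions (head, shaft, condyles), each with its own small
# affine velocity (log-transform): v_i(x) = A_i @ x + t_i. Values are made up.
regions = {
    "head":     {"center": np.array([0.0, 10.0]),  "A": 0.05 * np.eye(2),  "t": np.array([0.5, 0.0])},
    "shaft":    {"center": np.array([0.0, 0.0]),   "A": np.zeros((2, 2)),  "t": np.array([0.0, 0.2])},
    "condyles": {"center": np.array([0.0, -10.0]), "A": -0.03 * np.eye(2), "t": np.array([-0.4, 0.0])},
}
SIGMA = 4.0   # width of the Gaussian region weights (assumed)

def region_weight(x, center):
    return np.exp(-np.sum((x - center) ** 2) / (2 * SIGMA ** 2))

def polyaffine_velocity(x):
    """Weighted fusion of the per-region affine velocities at point x."""
    num, den = np.zeros(2), 0.0
    for r in regions.values():
        w = region_weight(x, r["center"])
        num += w * (r["A"] @ x + r["t"])
        den += w
    return num / den

# Evaluate the fused velocity along the bone axis: it interpolates smoothly
# between the head, shaft, and condyle transformations.
for z in [10.0, 5.0, 0.0, -5.0, -10.0]:
    print(z, polyaffine_velocity(np.array([0.0, z])))
```

Used as a regularizer, such a fused field constrains the update toward the few parameters of the three-component model instead of an unconstrained, Gaussian-smoothed deformation.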

Relevance: 30.00%

Abstract:

Objectives: In fast ball sports like beach volleyball, decision-making skills are a determining factor for excellent performance. The current investigation aimed to identify factors that influence the decision-making process in top-level beach volleyball defense in order to find relevant aspects for further research. For this reason, focused interviews with top players in international beach volleyball were conducted and analyzed with respect to decision-making characteristics. Design: Nineteen world-tour beach volleyball defense players, including seven Olympic or world champions, were interviewed, focusing on decision-making factors, gaze behavior, and interactions between the two. Methods: Verbal data were analyzed by inductive content analysis according to Mayring (2008). This approach allows categories to emerge from the interview material itself instead of forcing data into preset classifications and theoretical concepts. Results: The data analysis showed that, for top-level beach volleyball defense, decision making depends on opponent specifics, external context, situational context, the opponent's movements, and intuition. Information on gaze patterns and visual cues revealed general tendencies indicating optimal gaze strategies that support excellent decision making. Furthermore, the analysis highlighted interactions between gaze behavior, visual information, and domain-specific knowledge. Conclusions: The present findings provide information on visual perception, domain-specific knowledge, and interactions between the two that are relevant for decision making in top-level beach volleyball defense. The results can be used to inform sports practice and to further untangle relevant mechanisms underlying decision making in complex game situations.