20 results for Extraction of BR from Source Code

at BORIS: Bern Open Repository and Information System - Bern - Switzerland


Relevance:

100.00%

Publisher:

Abstract:

Automatic identification and extraction of bone contours from X-ray images is an essential first step for further medical image analysis. In this paper we propose a 3D statistical model based framework for proximal femur contour extraction from calibrated X-ray images. The automatic initialization is solved by an estimation of Bayesian network algorithm that fits a multiple-component geometrical model to the X-ray data. The contour extraction is accomplished by a non-rigid 2D/3D registration between a 3D statistical model and the X-ray images, in which bone contours are extracted by graphical-model-based Bayesian inference. Preliminary experiments on clinical data sets verified the validity of the approach.

Relevance:

100.00%

Publisher:

Abstract:

Rationale: Focal onset epileptic seizures are due to abnormal interactions between distributed brain areas. By estimating the cross-correlation matrix of multi-site intra-cerebral EEG recordings (iEEG), one can quantify these interactions. To assess the topology of the underlying functional network, the binary connectivity matrix has to be derived from the cross-correlation matrix by use of a threshold. Classically, a unique threshold is used that constrains the topology [1]. Our method aims to set the threshold in a data-driven way by separating genuine from random cross-correlation. We compare our approach to the fixed threshold method and study the dynamics of the functional topology.

Methods: We investigate the iEEG of patients suffering from focal onset seizures who underwent evaluation for the possibility of surgery. The equal-time cross-correlation matrices are evaluated using a sliding time window. We then compare three approaches to assessing the corresponding binary networks. For each time window:

* Our parameter-free method derives from the cross-correlation strength matrix (CCS) [2]. It aims at disentangling genuine from random correlations (due to the finite length and varying frequency content of the signals). In practice, a threshold is evaluated for each pair of channels independently, in a data-driven way.
* The fixed mean degree (FMD) method uses a unique threshold on the whole connectivity matrix so as to ensure a user-defined mean degree.
* The varying mean degree (VMD) method uses the mean degree of the CCS network to set a unique threshold for the entire connectivity matrix.
* Finally, the connectivity (c), the connectedness (given by k, the number of disconnected sub-networks), and the mean global and local efficiencies (Eg and El, respectively) are computed from the FMD, CCS, and VMD networks and their corresponding random and lattice networks.

Results: Compared to FMD and VMD, CCS networks present:

* topologies that are different in terms of c, k, Eg and El;
* from the pre-ictal to the ictal and then the post-ictal period, topological feature time courses that are more stable within a period and more contrasted from one period to the next.

For CCS, pre-ictal connectivity is low, increases to a high level during the seizure, then decreases at offset. k shows a "U-curve" underlining the synchronization of all electrodes during the seizure. The Eg and El time courses fluctuate between the corresponding random- and lattice-network values in a reproducible manner.

Conclusions: The definition of a data-driven threshold provides new insights into the topology of epileptic functional networks.
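The per-pair, data-driven thresholding idea can be illustrated with a small sketch. The function below (`binary_network`, our own name) binarizes a cross-correlation matrix by comparing each channel pair against a surrogate distribution from randomly time-shifted channels; this is a minimal stand-in for the CCS method under our own assumptions, not the authors' exact algorithm.

```python
import numpy as np

def binary_network(signals, n_surrogates=20, quantile=0.95, rng=None):
    """Derive a binary functional network from multichannel signals
    with a per-pair, data-driven threshold.

    Genuine correlations are separated from random ones by comparing
    each channel pair's correlation with a surrogate distribution
    obtained from randomly time-shifted channels (shifting destroys
    genuine zero-lag coupling while preserving each signal's spectrum).

    signals: (n_channels, n_samples) array.  Returns a 0/1 adjacency
    matrix with an empty diagonal.
    """
    rng = np.random.default_rng(rng)
    n_ch, n_s = signals.shape
    corr = np.abs(np.corrcoef(signals))          # observed correlations

    surr = np.empty((n_surrogates, n_ch, n_ch))
    for k in range(n_surrogates):
        shifted = np.array([np.roll(s, rng.integers(1, n_s))
                            for s in signals])
        surr[k] = np.abs(np.corrcoef(shifted))
    # Per-pair threshold: a high quantile of the surrogate correlations.
    thresh = np.quantile(surr, quantile, axis=0)

    adj = (corr > thresh).astype(int)
    np.fill_diagonal(adj, 0)
    return adj
```

Graph measures such as the mean degree, global efficiency, or the number of connected components can then be computed from the returned adjacency matrix for each sliding window.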

Relevance:

100.00%

Publisher:

Abstract:

As more and more open-source software components become available on the internet, we need automatic ways to label and compare them. For example, a developer who searches for reusable software must be able to quickly gain an understanding of the retrieved components. This understanding cannot be gained at the level of source code, due to the semantic gap between source code and the domain model. In this paper we present a lexical approach that uses the log-likelihood ratios of word frequencies to automatically provide labels for software components. We present a prototype implementation of our labeling/comparison algorithm and provide examples of its application. In particular, we apply the approach to detect trends in the evolution of a software system.
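The scoring step of such a lexical labeling approach can be sketched with Dunning's log-likelihood ratio, a standard instantiation of "log-likelihood ratios of word frequencies". The function below (`salient_terms`, a hypothetical name; the paper's tokenization and exact scoring details are not given in the abstract) ranks a component's vocabulary against a background corpus:

```python
import math
from collections import Counter

def salient_terms(component_tokens, corpus_tokens, top_n=5):
    """Rank a component's vocabulary by Dunning's log-likelihood ratio
    against a background corpus: terms that are unexpectedly frequent
    in the component score highest and can serve as labels."""
    doc, bg = Counter(component_tokens), Counter(corpus_tokens)
    c, d = sum(doc.values()), sum(bg.values())
    scores = {}
    for word, a in doc.items():
        b = bg.get(word, 0)
        e1 = c * (a + b) / (c + d)        # expected count in the component
        e2 = d * (a + b) / (c + d)        # expected count in the corpus
        ll = 2.0 * a * math.log(a / e1)
        if b:
            ll += 2.0 * b * math.log(b / e2)
        scores[word] = ll
    return sorted(scores, key=scores.get, reverse=True)[:top_n]
```

Feeding the identifiers of one component as `component_tokens` and those of a larger code base as `corpus_tokens` yields the terms most characteristic of that component.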

Relevance:

100.00%

Publisher:

Abstract:

Systems must co-evolve with their context, and reverse engineering tools are a great help in this process of required adaptation. For these tools to be flexible, they work with models: abstract representations of the source code. Such information can be extracted from source code using a parser; however, building new parsers is fairly tedious, and this is made worse by the fact that it has to be done over and over again for every language we want to analyze. In this paper we propose a novel approach that minimizes the language-specific knowledge required to extract models from code in a given language, by reflecting on the implementation of preparsed ASTs provided by an IDE. In a second phase we use a technique referred to as Model Mapping by Example to map platform-dependent models onto domain-specific models.
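The idea of reflecting on preparsed ASTs instead of writing a parser can be shown in miniature, with Python's `ast` module standing in for an IDE-provided AST: walking the tree yields a small platform-independent model (here, classes and their methods) with no grammar work. This is a sketch of the general technique, not the paper's implementation.

```python
import ast

def extract_model(source):
    """Extract a tiny language-independent model -- classes mapped to
    their method names -- by walking a preparsed AST instead of
    writing a dedicated parser."""
    model = {}
    for node in ast.walk(ast.parse(source)):
        if isinstance(node, ast.ClassDef):
            model[node.name] = [n.name for n in node.body
                                if isinstance(n, ast.FunctionDef)]
    return model
```

The resulting dictionary is the kind of platform-dependent model that a second phase could then map onto a domain-specific model.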

Relevance:

100.00%

Publisher:

Abstract:

For many years a combined analysis of pionic hydrogen and pionic deuterium atoms has been known as a good tool to extract information on the isovector and especially on the isoscalar s-wave πN scattering length. However, given the smallness of the isoscalar scattering length, the analysis becomes useful only if the pion–deuteron scattering length is controlled theoretically to a high accuracy, comparable to the experimental precision. To achieve the required few-percent accuracy one needs theoretical control over all isospin-conserving three-body πNN → πNN operators up to one order before the contribution of the dominant unknown (N†N)²ππ contact term. This term appears at next-to-next-to-leading order in Weinberg counting. In addition, one needs to include isospin-violating effects in both two-body (πN) and three-body (πNN) operators. In this talk we discuss the results of the recent analysis where these isospin-conserving and isospin-violating effects have been carefully taken into account. Based on this analysis, we present up-to-date values of the s-wave πN scattering lengths.

Relevance:

100.00%

Publisher:

Abstract:

The population of space debris has increased drastically during the last years, and these objects have become a great threat to active satellites. Because the relative velocities between space debris and satellites are high, space debris objects may destroy active satellites through collisions. Furthermore, collisions involving massive objects produce large numbers of fragments, leading to significant growth of the space debris population. The long-term evolution of the debris population is essentially driven by so-called catastrophic collisions. An effective remediation measure to stabilize the population in Low Earth Orbit (LEO) is therefore the removal of large, massive space debris. To remove these objects, not only precise orbits but also more detailed information about their attitude states will be required. Important properties of an object targeted for removal are its spin period, its spin axis orientation, and their change over time. Rotating objects produce periodic brightness variations with frequencies related to their spin periods; such a brightness variation over time is called a light curve. Collecting, but also processing, light curves is challenging for several reasons: light curves may be undersampled, low-frequency components due to phase angle and atmospheric extinction changes may be present, and beat frequencies may occur when the rotation period is close to a multiple of the sampling period. Depending on the method used to extract the frequencies, method-specific properties also have to be taken into account. The Astronomical Institute of the University of Bern (AIUB) light curve database will be introduced, which contains more than 1,300 light curves acquired over more than seven years. We will discuss the properties and reliability of different time series analysis methods tested and currently used by AIUB for light curve processing. Extracted frequencies and reconstructed phases will be presented for some interesting targets, e.g. GLONASS satellites, for which SLR data were also available for period confirmation. Finally, we will present the reconstructed phase and its evolution over time of a High-Area-to-Mass-Ratio (HAMR) object which AIUB observed for several years.
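For unevenly sampled light curves of the kind described above, a Lomb-Scargle periodogram is a standard way to extract the dominant brightness-variation frequency. The sketch below (`spin_period`, our own name; the parameter ranges are assumptions) detrends a light curve and returns the strongest period; it illustrates the generic technique, not AIUB's actual processing chain.

```python
import numpy as np
from scipy.signal import lombscargle

def spin_period(t, mag, p_min=10.0, p_max=600.0, n_freq=5000):
    """Estimate the dominant brightness-variation period of a light
    curve via a Lomb-Scargle periodogram, which tolerates the uneven
    sampling typical of such data.

    t   : observation epochs in seconds (possibly unevenly spaced)
    mag : brightness measurements
    """
    # Remove slow trends (e.g. phase angle, extinction) with a linear fit.
    mag = mag - np.polyval(np.polyfit(t, mag, 1), t)
    periods = np.linspace(p_max, p_min, n_freq)
    omega = 2.0 * np.pi / periods          # angular frequencies, rad/s
    power = lombscargle(t, mag - mag.mean(), omega)
    return periods[np.argmax(power)]
```

Note that such a periodogram only yields the dominant brightness-variation period; relating it to the physical spin period (and handling aliases and beat frequencies) still requires the method-specific care the abstract mentions.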

Relevance:

100.00%

Publisher:

Abstract:

Recently developed computer applications provide tools for planning cranio-maxillofacial interventions based on 3-dimensional (3D) virtual models of the patient's skull obtained from computed tomography (CT) scans. Precise knowledge of the location of the mid-facial plane is important for the assessment of deformities and for planning reconstructive procedures. In this work, a new method is presented to automatically compute the mid-facial plane on the basis of a surface model of the facial skeleton obtained from CT. The method matches homologous surface areas, selected by the user on the left and right facial sides, using an iterative closest point optimization. The symmetry plane which best approximates this matching transformation is then computed. The new automatic method was evaluated in an experimental study in which experienced and inexperienced clinicians defined the symmetry plane by a selection of landmarks. This manual definition was systematically compared with the definition resulting from the new automatic method: the quality of the symmetry planes was evaluated by their ability to match homologous areas of the face. Results show that the new automatic method is reliable and leads to significantly higher accuracy than the manual method when the latter is performed by inexperienced clinicians. In addition, the method performs equally well in difficult trauma situations, where key landmarks are unreliable or absent.
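The geometric step of turning a left-to-right matching into a symmetry plane can be illustrated with a simplification: for matched homologous point pairs related by an exact reflection, every difference vector is parallel to the plane normal and every midpoint lies on the plane. The sketch below exploits this via an SVD of the difference vectors; it is a stand-in for deriving the plane from the full ICP matching transformation, not the paper's method.

```python
import numpy as np

def symmetry_plane(left_pts, right_pts):
    """Fit the plane that best reflects matched left-side points onto
    their right-side counterparts.

    For an exact reflection, each difference p - q is parallel to the
    plane normal and each midpoint (p + q)/2 lies on the plane, so the
    normal is the dominant direction of the difference vectors (first
    right singular vector) and the plane passes through the mean midpoint.

    left_pts, right_pts: (n, 3) arrays of matched homologous points.
    Returns (unit_normal, point_on_plane).
    """
    diffs = left_pts - right_pts
    # Dominant direction of the (uncentered) difference vectors.
    _, _, vt = np.linalg.svd(diffs, full_matrices=False)
    normal = vt[0] / np.linalg.norm(vt[0])
    point = 0.5 * (left_pts + right_pts).mean(axis=0)
    return normal, point
```

With noisy or imperfectly matched pairs, the SVD yields the least-squares best direction, so the fit degrades gracefully rather than failing outright.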