919 results for data-driven Stochastic Subspace Identification (SSI-data)


Relevance:

60.00%

Publisher:

Abstract:

This paper addresses the problem of fully automatic localization and segmentation of 3D intervertebral discs (IVDs) from MR images. Our method consists of two steps: we first localize the center of each IVD, and then segment the IVDs by classifying the image pixels around each disc center as foreground (disc) or background. Localization is done by estimating the displacements from a set of randomly sampled 3D image patches to the disc center. The displacements are estimated by jointly optimizing the training and test displacement values in a data-driven way, taking into consideration both the training data and the geometric constraint on the test image. Once the disc centers are localized, we segment the discs with a similar data-driven approach, but now estimating the foreground/background probability of each pixel instead of the image displacements. In addition, an extra neighborhood smoothness constraint is introduced to enforce local smoothness of the label field. Our method is validated on 3D T2-weighted turbo spin echo MR images of 35 patients from two different studies. Experiments show that, compared to the state of the art, our method achieves better or comparable results: a mean localization error of 1.6-2.0 mm, a mean Dice metric of 85%-88%, and a mean surface distance of 1.3-1.4 mm.
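As a rough illustration of the localization step only, the sketch below aggregates per-patch displacement votes into a single center estimate. It is a minimal sketch, not the paper's method: the joint optimization of training and test displacements is abstracted into a hypothetical regressor `predict_displacement`, and all names are illustrative.

```python
# Minimal sketch of patch-based center voting (illustrative, not the
# paper's algorithm): each randomly sampled 3D patch predicts a
# displacement to the disc center, and the votes are aggregated.
import numpy as np

def localize_disc_center(image, predict_displacement, n_patches=200,
                         patch_size=(15, 15, 15), seed=0):
    """Estimate a disc center by aggregating patch-wise displacement votes."""
    rng = np.random.default_rng(seed)
    shape = np.array(image.shape)
    half = np.array(patch_size) // 2
    # Sample patch centers away from the image border.
    centers = rng.integers(half, shape - half, size=(n_patches, 3))
    votes = []
    for c in centers:
        patch = image[c[0]-half[0]:c[0]+half[0]+1,
                      c[1]-half[1]:c[1]+half[1]+1,
                      c[2]-half[2]:c[2]+half[2]+1]
        # The regressor returns the 3D displacement from this patch
        # to the disc center; patch position plus displacement is a vote.
        votes.append(c + predict_displacement(patch))
    # A robust aggregate (median) of all votes is the center estimate.
    return np.median(np.array(votes), axis=0)
```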

Relevance:

60.00%

Publisher:

Abstract:

Many of the interesting physics processes to be measured at the LHC have a signature involving one or more isolated electrons. The electron reconstruction and identification efficiencies of the ATLAS detector at the LHC have been evaluated using proton–proton collision data collected in 2011 at √s = 7 TeV and corresponding to an integrated luminosity of 4.7 fb⁻¹. Tag-and-probe methods using events with leptonic decays of W and Z bosons and J/ψ mesons are employed to benchmark these performance parameters. The combination of all measurements results in identification efficiencies determined with an accuracy at the few per mil level for electron transverse energy greater than 30 GeV.
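In outline, a tag-and-probe measurement selects Z→ee, W→eν or J/ψ→ee candidates using one tightly identified "tag" electron, then counts how often the unbiased "probe" electron passes the identification criteria. A minimal sketch of the resulting efficiency estimate and its binomial uncertainty (the background subtraction that the actual measurement requires is omitted, and the notation is illustrative):

```latex
% Generic tag-and-probe efficiency; notation is illustrative.
\[
  \varepsilon_{\mathrm{id}}
    = \frac{N_{\mathrm{probe}}^{\mathrm{pass}}}
           {N_{\mathrm{probe}}^{\mathrm{pass}} + N_{\mathrm{probe}}^{\mathrm{fail}}},
  \qquad
  \sigma_{\varepsilon}
    = \sqrt{\frac{\varepsilon_{\mathrm{id}}\,(1 - \varepsilon_{\mathrm{id}})}
                 {N_{\mathrm{probe}}^{\mathrm{pass}} + N_{\mathrm{probe}}^{\mathrm{fail}}}}
\]
```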

Relevance:

60.00%

Publisher:

Abstract:

This paper extends the existing research on real estate investment trust (REIT) operating efficiencies. We estimate a stochastic-frontier panel-data model specifying a translog cost function, covering 1995 to 2003. The results disagree with previous research in that we find little evidence of scale economies and some evidence of scale diseconomies. Moreover, we also generally find smaller inefficiencies than those shown by other REIT studies. Contrary to previous research, the results also show that self-management of a REIT associates with more inefficiency when we measure output with assets. When we use revenue to measure output, self-management associates with less inefficiency. Also contrary to previous research, higher leverage associates with more efficiency. The results further suggest that inefficiency increases over time in three of our four specifications.
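For context, a stochastic cost frontier augments a deterministic cost function with a two-sided noise term and a one-sided inefficiency term. A generic single-output translog form, in illustrative notation rather than the paper's exact specification, looks like this:

```latex
% Generic translog stochastic cost frontier; symbols are illustrative.
\[
  \ln C_{it} = \alpha_0 + \alpha_y \ln y_{it}
             + \tfrac{1}{2}\,\alpha_{yy}\,(\ln y_{it})^2
             + \sum_k \beta_k \ln w_{kit}
             + \sum_k \rho_k \ln y_{it} \ln w_{kit}
             + \tfrac{1}{2}\sum_k \sum_l \gamma_{kl} \ln w_{kit} \ln w_{lit}
             + v_{it} + u_{it}
\]
% C: total cost, y: output, w_k: input prices,
% v: two-sided noise, u >= 0: inefficiency term.
% Scale economies are read off the output elasticity of cost,
% d ln C / d ln y; values below one indicate economies of scale.
```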

Relevance:

60.00%

Publisher:

Abstract:

This study used a retrospective design and secondary data from the National Child Abuse and Neglect Data System (NCANDS), provided by the National Data Archive on Child Abuse and Neglect at the Family Life Development Center, Cornell University. The dataset contained information for the year 2005 on children from birth to 18 years of age. Child abuse and neglect among children with disabilities was evaluated in depth in the present study. Descriptive and statistical analyses were performed comparing children with and without disabilities. It was found that children with disabilities have a lower rate of substantiation, which likely indicates that their disabilities interfere with reporting. The results of this research demonstrate the important need to teach professionals and laypersons alike how to recognize and substantiate abuse among children with disabilities.

Relevance:

60.00%

Publisher:

Abstract:

Background: Heart failure (CHF) is the most frequent and prognostically severe symptom of aortic stenosis (AS), and the most common indication for surgery. The mainstay of treatment for AS is aortic valve replacement (AVR), and the main indication for AVR is the development of symptomatic disease. ACC/AHA guidelines define severe AS as an aortic valve area (AVA) ≤1 cm², but there is little data correlating echocardiographic AVA with the onset of symptomatic CHF. We evaluated the risk of developing CHF with progressively decreasing echocardiographic AVA. We also compared echocardiographic AVA with jet velocity (V2) and indexed AVA (AVAI) to assess the best predictor of the development of symptomatic CHF. Methods and Results: This retrospective cohort study evaluated 518 patients with asymptomatic moderate or severe AS from a single community-based cardiology practice. A total of 925 echocardiograms were performed over an 11-year period. Each echocardiogram was correlated with concurrent clinical assessments while the investigator was blinded to the echocardiographic severity of AS. The Cox proportional hazards model was used to analyze the relationship between AVA and the development of CHF. The median age of patients at entry was 76.1 years, and 54% were male. A total of 116 patients (21.8%) developed new-onset CHF during follow-up. Compared to patients with AVA >1.0 cm², patients with lower AVA had an exponentially increasing risk of developing CHF for each 0.2 cm² decrement in AVA, becoming statistically significant only at an AVA less than 0.8 cm². Also, compared to V2 and AVAI, AVA added more information in assessing the risk of developing CHF (p=0.041). Conclusion: In patients with normal or mildly impaired LVEF, the risk of CHF rises exponentially with decreasing valve area and becomes statistically significant after AVA falls below 0.8 cm². AVA is a better predictor of CHF than V2 or AVAI.
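As a hedged illustration of this kind of survival analysis (not the study's actual code or data), a Cox proportional hazards fit relating AVA to time-to-CHF could look like the following in Python with the lifelines library. The data frame and its column names are hypothetical, and the study's categorization of AVA into 0.2 cm² decrements is omitted in favor of a continuous covariate:

```python
# Hypothetical sketch of a Cox proportional hazards analysis of AVA
# versus new-onset CHF; data and column names are illustrative.
import pandas as pd
from lifelines import CoxPHFitter

# One row per patient: follow-up time in years, CHF event indicator,
# and echocardiographic aortic valve area in cm^2.
df = pd.DataFrame({
    "years_followup": [2.1, 4.5, 1.3, 6.0, 3.2, 5.1],
    "developed_chf":  [1,   0,   1,   0,   1,   0],
    "ava_cm2":        [0.7, 1.2, 0.6, 1.1, 0.8, 1.0],
})

cph = CoxPHFitter()
cph.fit(df, duration_col="years_followup", event_col="developed_chf")
cph.print_summary()  # hazard ratio per unit change in AVA
```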

Relevance:

60.00%

Publisher:

Abstract:

Several activities in service-oriented computing, such as automatic composition, monitoring, and adaptation, can benefit from knowing properties of a given service composition before executing it. Among these properties we focus on those related to execution cost and resource usage, in a wide sense, as they can be linked to QoS characteristics. To attain more accuracy, we formulate execution costs / resource usage as functions of the input data (or appropriate abstractions thereof) and show how these functions can be used to make better, more informed decisions when performing composition, adaptation, and proactive monitoring. We present an approach to, on the one hand, synthesize these functions automatically from the definitions of the different orchestrations taking part in a system and, on the other hand, use them effectively to reduce the overall costs of non-trivial service-based systems featuring sensitivity to data and the possibility of failure. We validate our approach by means of simulations of scenarios requiring runtime selection of services and adaptation due to service failure. A number of rebinding strategies, including the use of cost functions, are compared.
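A toy sketch of the core idea, under assumed names (svcA, svcB, and rebind are illustrative, not from the paper): synthesized cost functions map an abstraction of the input data, here simply its size, to a predicted cost, and rebinding after a failure picks the cheapest remaining candidate.

```python
# Illustrative sketch, not the paper's formalism: per-service cost
# functions over an abstraction of the input data (its size), used to
# choose the cheapest candidate service when rebinding.
from typing import Callable, Dict, List

# Hypothetical synthesized cost functions: input size -> expected cost.
cost_functions: Dict[str, Callable[[int], float]] = {
    "svcA": lambda n: 5.0 + 0.10 * n,   # low fixed cost, pricier per item
    "svcB": lambda n: 20.0 + 0.02 * n,  # high fixed cost, cheaper per item
}

def rebind(candidates: List[str], input_size: int) -> str:
    """Choose the candidate service with the lowest predicted cost."""
    return min(candidates, key=lambda s: cost_functions[s](input_size))

print(rebind(["svcA", "svcB"], input_size=50))   # svcA: 10.0 vs 21.0
print(rebind(["svcA", "svcB"], input_size=500))  # svcB: 30.0 vs 55.0
```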

Relevance:

60.00%

Publisher:

Abstract:

The conformance of semantic technologies has to be systematically evaluated to measure and verify the real adherence of these technologies to the Semantic Web standards. Current evaluations of semantic technology conformance are not exhaustive enough and do not directly cover user requirements and use scenarios, which raises the need for a simple, extensible and parameterizable method to generate test data for such evaluations. To address this need, this paper presents a keyword-driven approach for generating ontology language conformance test data that can be used to evaluate semantic technologies, details the definition of a test suite for evaluating OWL DL conformance using this approach, and describes the use and extension of this test suite during the evaluation of some tools.
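A minimal sketch of what "keyword-driven" can mean here, with hypothetical keyword names (class, subclassof) rather than the paper's actual vocabulary: each keyword expands into a fragment of OWL test data, so conformance tests can be written as compact keyword sequences instead of raw ontology files.

```python
# Illustrative keyword-driven test data generation; keyword names and
# the RDF/XML fragments are assumptions, not the paper's test suite.
KEYWORDS = {
    "class": lambda name: f'<owl:Class rdf:about="#{name}"/>',
    "subclassof": lambda sub, sup: (
        f'<owl:Class rdf:about="#{sub}">'
        f'<rdfs:subClassOf rdf:resource="#{sup}"/></owl:Class>'
    ),
}

def expand(test_case):
    """Expand a list of (keyword, args) pairs into OWL test data."""
    return "\n".join(KEYWORDS[kw](*args) for kw, args in test_case)

print(expand([("class", ("Person",)),
              ("subclassof", ("Student", "Person"))]))
```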

Relevance:

60.00%

Publisher:

Abstract:

In the last decade, multi-sensor data fusion has become a broadly demanded discipline for achieving advanced solutions applicable to many real-world situations, both civil and military. In Defence, accurate detection of all target objects is fundamental to maintaining situational awareness, locating threats in the battlefield, and identifying and protecting one's own strategic forces. Civil applications, such as traffic monitoring, have similar requirements in terms of object detection and reliable identification of incidents in order to ensure the safety of road users. With the appropriate data fusion technique, these systems can automatically exploit all relevant information from multiple sources to meet, for instance, mission needs or to support daily supervision operations. This paper focuses on the application of data fusion to active vehicle monitoring in a particular area of high-density traffic, and on how it is redirecting research in computer vision, signal processing and machine learning towards improving the effectiveness of detection and tracking in ground surveillance scenarios in general. Specifically, our system fuses data at the feature level, extracted from a video camera and a laser scanner. In addition, we present a stochastic tracker that introduces particle filters into the model to deal with uncertainty due to occlusions and to improve the preceding detection output. This computer vision tracker has been shown to detect objects even under poor visual information. Finally, just as humans analyze both temporal and spatial relations among items in a scene to assign them a meaning, once the target objects have been correctly detected and tracked, it is desirable that machines provide a trustworthy description of what is happening in the scene under surveillance. Accomplishing such an ambitious task requires a machine-learning-based hierarchical architecture able to extract and analyse behaviours at different abstraction levels. A real experimental testbed, a closed circuit where real traffic situations can be simulated, has been implemented to evaluate the proposed modular system. First results have shown the strength of the proposed system.
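As a generic illustration of the particle-filter idea invoked above (the textbook algorithm, not the paper's specific motion or appearance model), each particle is a position hypothesis that is propagated, reweighted by the measurement likelihood, and resampled; prediction continues even when a measurement is missing, which is what carries the tracker through occlusions.

```python
# Minimal generic particle filter step (illustrative, not the paper's
# tracker): predict with a noisy motion model, reweight by a Gaussian
# measurement likelihood, then resample proportionally to the weights.
import numpy as np

rng = np.random.default_rng(1)

def particle_filter_step(particles, weights, measurement,
                         motion_std=1.0, meas_std=2.0):
    # Predict: propagate every particle with the noisy motion model.
    particles = particles + rng.normal(0.0, motion_std, size=particles.shape)
    # Update: reweight by measurement likelihood; during an occlusion
    # the measurement may be missing and the weights stay unchanged.
    if measurement is not None:
        d2 = np.sum((particles - measurement) ** 2, axis=1)
        weights = weights * np.exp(-0.5 * d2 / meas_std**2)
        weights = weights / weights.sum()
    # Resample: draw particles in proportion to their weights.
    idx = rng.choice(len(particles), size=len(particles), p=weights)
    return particles[idx], np.full(len(particles), 1.0 / len(particles))

particles = rng.normal(0.0, 5.0, size=(500, 2))   # 2D position hypotheses
weights = np.full(500, 1.0 / 500)
particles, weights = particle_filter_step(particles, weights,
                                          measurement=np.array([1.0, 2.0]))
print(particles.mean(axis=0))  # state estimate near the measurement
```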

Relevance:

60.00%

Publisher:

Abstract:

Context. This thesis is framed within experimental software engineering. More concretely, it addresses the problems that arise when assessing process conformance in test-driven development experiments conducted by UPM's Experimental Software Engineering group. Process conformance was studied using Besouro, an Eclipse plug-in. It has been observed that Besouro does not work correctly in some circumstances, which casts doubt on the correctness of the existing experimental data and renders it useless. Aim. The main objective of this work is the identification and correction of Besouro's faults. A secondary goal is fixing the datasets already obtained in past experiments to the maximum possible extent, so that existing experimental results can be used with confidence. Method. (1) Test Besouro using different sequences of events (method creation, assertions, etc.) to identify the underlying faults. (2) Fix the code, and (3) fix the datasets using code specially created for this purpose. Results. (1) We confirmed the existence of several faults in Besouro's code that affected Test-First and Test-Last episode identification. These faults caused the incorrect identification of 20% of episodes. (2) We were able to fix Besouro's code. (3) The correction of the existing datasets was possible, subject to some restrictions (such as the impossibility of tracing code-size increases to programming time). Conclusion. The results of past experiments that depend on Besouro's data may not be trustworthy. We suspect that more faults remain in Besouro's code, whose identification requires further analysis.

Relevance:

60.00%

Publisher:

Abstract:

We propose a general procedure for solving incomplete-data estimation problems. The procedure can be used to find the maximum likelihood estimate or to solve estimating equations in difficult cases such as estimation with the censored or truncated regression model, the nonlinear structural measurement error model, and the random effects model. The procedure is based on the general principles of stochastic approximation and Markov chain Monte Carlo. Applying the theory of adaptive algorithms, we derive conditions under which the proposed procedure converges. Simulation studies also indicate that the proposed procedure consistently converges to the maximum likelihood estimate for the structural measurement error logistic regression model.
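The generic update underlying such procedures (illustrative notation, not the paper's exact algorithm) alternates an MCMC imputation of the missing data with a Robbins-Monro-style move of the parameter estimate:

```latex
% Generic stochastic-approximation MCMC update; notation illustrative.
\[
  X_{k} \sim p_{\theta_{k}}(x \mid y_{\mathrm{obs}}),
  \qquad
  \theta_{k+1} = \theta_{k} + \gamma_{k}\, h(\theta_{k};\, y_{\mathrm{obs}}, X_{k}),
\]
% h is the complete-data score (or estimating function), and the gain
% sequence satisfies the usual conditions
% \sum_k \gamma_k = \infty and \sum_k \gamma_k^2 < \infty.
```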

Relevance:

60.00%

Publisher:

Abstract:

The Protein Information Resource, in collaboration with the Munich Information Center for Protein Sequences (MIPS) and the Japan International Protein Information Database (JIPID), produces the most comprehensive and expertly annotated protein sequence database in the public domain, the PIR-International Protein Sequence Database. To provide timely and high quality annotation and promote database interoperability, the PIR-International employs rule-based and classification-driven procedures based on controlled vocabulary and standard nomenclature and includes status tags to distinguish experimentally determined from predicted protein features. The database contains about 200 000 non-redundant protein sequences, which are classified into families and superfamilies and their domains and motifs identified. Entries are extensively cross-referenced to other sequence, classification, genome, structure and activity databases. The PIR web site features search engines that use sequence similarity and database annotation to facilitate the analysis and functional identification of proteins. The PIR-International databases and search tools are accessible on the PIR web site at http://pir.georgetown.edu/ and at the MIPS web site at http://www.mips.biochem.mpg.de. The PIR-International Protein Sequence Database and other files are also available by FTP.

Relevance:

60.00%

Publisher:

Abstract:

Molecular and fragment ion data of intact 8- to 43-kDa proteins from electrospray Fourier-transform tandem mass spectrometry are matched against the corresponding data in sequence databases. Extending the sequence tag concept of Mann and Wilm for matching peptides, a partial amino acid sequence in the unknown is first identified from the mass differences of a series of fragment ions, and the mass position of this sequence is defined from the molecular weight and the fragment ion masses. For three studied proteins, a single sequence tag retrieved only the correct protein from the database; a fourth protein required the input of two sequence tags. However, three of the database proteins differed by having an extra methionine or by missing an acetyl or heme substitution. The positions of these modifications in the protein examined were greatly restricted by the mass differences of its molecular and fragment ions versus those of the database. To characterize the primary structure of an unknown represented in the database, this method is fast and specific and does not require prior enzymatic or chemical degradation.
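The core computation can be sketched in a few lines: mass differences between successive fragment ions are matched against amino acid residue masses to spell out the tag. The helper below is illustrative, uses only a small subset of monoisotopic residue masses, and ignores complications such as noise peaks and incomplete ion series:

```python
# Illustrative sequence-tag reading from fragment-ion masses; the
# function name and tolerance are assumptions, not the paper's code.
RESIDUE_MASSES = {  # monoisotopic residue masses, small subset
    "G": 57.02146, "A": 71.03711, "S": 87.03203,
    "V": 99.06841, "L": 113.08406,  # L is indistinguishable from I by mass
}

def sequence_tag(fragment_masses, tol=0.02):
    """Infer residues from gaps between consecutive fragment-ion masses."""
    tag = []
    for lo, hi in zip(fragment_masses, fragment_masses[1:]):
        delta = hi - lo
        match = [aa for aa, m in RESIDUE_MASSES.items() if abs(m - delta) < tol]
        tag.append(match[0] if match else "?")
    return "".join(tag)

# Four fragment ions whose successive spacings spell the tag "GAS".
print(sequence_tag([500.00, 557.02, 628.06, 715.09]))
```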

Relevance:

60.00%

Publisher:

Abstract:

In order to protect critical military and commercial space assets, the United States Space Surveillance Network must have the ability to positively identify and characterize all space objects. Unfortunately, positive identification and characterization of space objects is a manual and labor-intensive process today, since even large telescopes cannot provide resolved images of most space objects. Since resolved images of geosynchronous satellites are not technically feasible with current technology, another method of distinguishing space objects was explored that exploits the polarization signature from unresolved images. The objective of this study was to collect and analyze visible-spectrum polarization data from unresolved images of geosynchronous satellites taken over various solar phase angles. Different collection geometries were used to evaluate the polarization contribution of solar arrays, thermal control materials, antennas, and the satellite bus as the solar phase angle changed. Since materials on space objects age due to the space environment, it was postulated that their polarization signature may change enough to allow discrimination of identical satellites launched at different times. The instrumentation used in this experiment was a United States Air Force Academy (USAFA) Department of Physics system consisting of a 20-inch Ritchey-Chrétien telescope and a dual focal-plane optical train fed by a polarizing beam splitter. A rigorous calibration of the system was performed, including corrections for pixel bias, dark current, and response. Additionally, the two-channel polarimeter was calibrated by experimentally determining the Mueller matrix for the system and relating image intensity at the two cameras to the Stokes parameters S0 and S1. After the system calibration, polarization data were collected over three nights on eight geosynchronous satellites built by various manufacturers and launched several years apart. Three pairs among the eight satellites had identical buses, making it possible to test whether identical buses could be correctly differentiated. When the Stokes parameters were plotted against time and solar phase angle, the data showed distinguishing features in S0 (total intensity) and S1 (linear polarization) that may lead to positive identification or classification of each satellite.
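For reference, in the idealized case the two orthogonally polarized channels behind a polarizing beam splitter relate to the first two Stokes parameters as follows (the study's actual reduction used the experimentally determined Mueller matrix rather than this idealization):

```latex
% Ideal two-channel polarimeter relations; I_0 and I_90 denote the
% calibrated intensities in the two orthogonal channels.
\[
  S_0 = I_{0^\circ} + I_{90^\circ}, \qquad
  S_1 = I_{0^\circ} - I_{90^\circ}, \qquad
  \frac{S_1}{S_0} = \text{degree of linear polarization along this axis.}
\]
```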