861 results for Multi Domain Information Model
Abstract:
Osteoarticular allograft transplantation is a popular treatment in wide surgical resections with large defects, and hospitals are therefore building bone data banks. Selecting the optimal allograft from a bone bank is crucial to the surgical outcome and patient recovery, but current approaches are very time-consuming, hindering efficient selection. We present an automatic method based on registration of femur bones to overcome this limitation. We introduce a new regularization term for the log-domain demons algorithm that replaces the standard Gaussian smoothing with a femur-specific polyaffine model. The polyaffine femur model is constructed from two affine (femoral head and condyles) and one rigid (shaft) transformation. Our main contribution is to show that the demons algorithm can be improved in specific cases with an appropriate model. We do not seek the optimal polyaffine model of the femur, but the simplest model with a minimal number of parameters: there is no need to optimize over the number of regions, their boundaries, or the choice of weights, since this fine-tuning is performed automatically by a final demons relaxation step with Gaussian smoothing. The approach provides a clear, anatomically motivated modeling contribution through the specific three-component transformation model, and shows a performance improvement (in terms of anatomically meaningful correspondences) on 146 CT images of femurs compared to a standard multiresolution demons. In addition, this simple model improves the robustness of the demons algorithm while preserving its accuracy. The ground truth consists of manual measurements performed by medical experts.
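The polyaffine fusion described above can be sketched as a weighted sum of log-transforms (stationary velocity fields). A minimal NumPy sketch, assuming Gaussian region weights along the femur axis; the region centers, sigma, and matrix logarithms are invented for illustration, not taken from the paper:

```python
import numpy as np

# Hypothetical 1-D femur axis (head -> shaft -> condyles); all values illustrative.
def region_weights(z, centers, sigma=30.0):
    """Soft Gaussian region weights, normalized to sum to 1 at every point."""
    w = np.exp(-((z[:, None] - centers[None, :]) ** 2) / (2 * sigma ** 2))
    return w / w.sum(axis=1, keepdims=True)

def polyaffine_velocity(points, log_transforms, weights):
    """Log-domain fusion: the velocity at x is the weighted sum of the
    per-region matrix logarithms applied to x (homogeneous coordinates).
    points: (N, 3); log_transforms: list of (4, 4) matrix logs; weights: (N, R)."""
    pts_h = np.hstack([points, np.ones((len(points), 1))])
    v = np.zeros_like(points)
    for i, L in enumerate(log_transforms):
        v += weights[:, i:i + 1] * (pts_h @ L.T)[:, :3]
    return v
```

In the log-domain demons setting, this fused velocity field would play the role of the femur-specific regularizer before the final Gaussian relaxation step.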
Abstract:
Numerous bacterial pathogens subvert cellular functions of eukaryotic host cells by injecting effector proteins via dedicated secretion systems. The type IV secretion system (T4SS) effector protein BepA from Bartonella henselae is composed of an N-terminal Fic domain and a C-terminal Bartonella intracellular delivery domain, the latter being responsible for T4SS-mediated translocation into host cells. A proteolysis-resistant fragment (residues 10-302) that includes the Fic domain shows autoadenylylation activity and adenylyl transfer onto HeLa cell extract proteins, as demonstrated by autoradiography upon incubation with α-[(32)P]-ATP. Its crystal structure, determined to 2.9-Å resolution by the SeMet-SAD method, exhibits the canonical Fic fold, including the HPFxxGNGRxxR signature motif, with several elaborations in loop regions and an additional β-rich domain at the C-terminus. Upon crystal soaking with ATP/Mg(2+), additional electron density indicated the presence of a PP(i)/Mg(2+) moiety, the side product of the adenylylation reaction, in the anion-binding nest of the signature motif. On the basis of this information and of the recent structure of IbpA(Fic2) in complex with the eukaryotic target protein Cdc42, we present a detailed model for the ternary complex of Fic with its two substrates, ATP/Mg(2+) and the target tyrosine. The model is consistent with an in-line nucleophilic attack of the deprotonated side-chain hydroxyl group on the α-phosphorus of the nucleotide to accomplish AMP transfer. Furthermore, a general, sequence-independent mechanism of target positioning through antiparallel β-strand interactions between enzyme and target is suggested.
Abstract:
Optical coherence tomography (OCT) is a well-established imaging modality in ophthalmology and is used daily in the clinic. Automatic evaluation of such datasets requires an accurate segmentation of the retinal cell layers. However, due to the naturally low signal-to-noise ratio and the resulting poor image quality, this task remains challenging. We propose an automatic graph-based multi-surface segmentation algorithm that internally uses soft constraints to add prior information from a learned model. This improves the accuracy of the segmentation and increases its robustness to noise. Furthermore, we show that the graph size can be greatly reduced by applying a smart segmentation scheme. This allows the segmentation to be computed in seconds instead of minutes without deteriorating accuracy, making it ideal for a clinical setting. An extensive evaluation on 20 OCT datasets of healthy eyes showed a mean unsigned segmentation error of 3.05 ± 0.54 μm over all datasets when compared to the average observer, which is lower than the inter-observer variability. Similar performance was measured for the task of drusen segmentation, demonstrating the usefulness of soft constraints as a tool to deal with pathologies.
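As a rough illustration of how a soft shape prior enters such a formulation, the sketch below swaps the paper's graph-based multi-surface solver for a simple per-column dynamic program: the learned model appears as a quadratic penalty on deviation from a prior surface rather than a hard bound. The parameters (`lambda_prior`, `max_jump`) are assumed values for this toy example, not the authors':

```python
import numpy as np

def segment_surface(cost, prior, lambda_prior=0.1, max_jump=2):
    """cost: (H, W) per-pixel boundary cost; prior: (W,) expected row per column.
    Returns one row index per column, found by dynamic programming."""
    H, W = cost.shape
    rows = np.arange(H)
    # Soft constraint: deviating from the learned prior is penalized, not forbidden.
    total = cost + lambda_prior * (rows[:, None] - prior[None, :]) ** 2
    dp = np.full((H, W), np.inf)
    back = np.zeros((H, W), dtype=int)
    dp[:, 0] = total[:, 0]
    for c in range(1, W):
        for r in range(H):
            # Hard smoothness constraint: at most max_jump rows between columns.
            lo, hi = max(0, r - max_jump), min(H, r + max_jump + 1)
            prev = lo + int(np.argmin(dp[lo:hi, c - 1]))
            back[r, c] = prev
            dp[r, c] = total[r, c] + dp[prev, c - 1]
    # Backtrack the minimum-cost surface.
    surface = np.zeros(W, dtype=int)
    surface[W - 1] = int(np.argmin(dp[:, W - 1]))
    for c in range(W - 1, 0, -1):
        surface[c - 1] = back[surface[c], c]
    return surface
```

The paper's graph formulation solves all surfaces jointly via a minimum-cut; this one-surface dynamic program only conveys where the soft prior term enters the objective.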
Abstract:
The group analysed syntactic and phonological phenomena that presuppose the existence of interrelated components within the lexicon, motivating the assumption that there are sublexicons within the global lexicon of a speaker. This result is confirmed by experimental findings in neurolinguistics: Hungarian-speaking agrammatic aphasics were tested in several ways, the results showing that the sublexicon of closed-class lexical items provides a highly automated, complex device for processing surface sentence structure. Analysing Hungarian ellipsis data from a semantic-syntactic perspective, the group established that the lexicon is best conceived of as split into at least two main sublexicons: the store of semantic-syntactic feature bundles and a separate store of sound forms. On this basis they proposed a format for representing open-class lexical items whose meanings are connected via certain semantic relations. They also proposed a new classification of verbs to account for their contribution to the aspectual reading of the sentence depending on the referential type of the argument, and a new account of the syntactic and semantic behaviour of aspectual prefixes. The partitioned sets of lexical items are sublexicons on phonological grounds; these sublexicons differ in terms of phonotactic grammaticality. The degrees of phonotactic grammaticality are tied up with the problem of psychological reality, namely how many such degrees native speakers are sensitive to. The group developed a hierarchical construction network as an extension of the original General Inheritance Network formalism, and this framework was then used as a platform for the implementation of the grammar fragments.
Abstract:
Multi-site time series studies of air pollution and mortality and morbidity have figured prominently in the literature as comprehensive approaches for estimating acute effects of air pollution on health. Hierarchical models are generally used to combine site-specific information and estimate pooled air pollution effects, taking into account both within-site statistical uncertainty and across-site heterogeneity. Within a site, characteristics of time series data of air pollution and health (small pollution effects, missing data, highly correlated predictors, non-linear confounding, etc.) make modelling all sources of uncertainty challenging. One potential consequence is underestimation of the statistical variance of the site-specific effects to be combined. In this paper we investigate the impact of variance underestimation on the pooled relative rate estimate. We focus on two-stage normal-normal hierarchical models and on underestimation of the statistical variance at the first stage. Through mathematical considerations and simulation studies, we found that variance underestimation does not affect the pooled estimate substantially. However, some sensitivity of the pooled estimate to variance underestimation is observed when the number of sites is small and the underestimation is severe. These simulation results are applicable to any two-stage normal-normal hierarchical model for combining site-specific results, and they can be easily extended to more general hierarchical formulations. We also examined the impact of variance underestimation on the national average relative rate estimate from the National Morbidity Mortality Air Pollution Study and found that variance underestimation of as much as 40% has little effect on the national average.
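The pooling step of the two-stage normal-normal model, and the cancellation that makes the pooled estimate insensitive to proportional variance underestimation, can be seen in a few lines (all numbers are illustrative):

```python
import numpy as np

# Second stage of the normal-normal model: beta_hat[s] ~ N(beta_s, v[s]),
# beta_s ~ N(beta, tau2); the pooled estimate uses inverse total-variance weights.
def pooled_estimate(beta_hat, v, tau2=0.0):
    w = 1.0 / (v + tau2)
    return float(np.sum(w * beta_hat) / np.sum(w))

rng = np.random.default_rng(0)
n_sites = 90
v = rng.uniform(0.5, 2.0, n_sites)         # true within-site variances
beta_hat = rng.normal(0.5, np.sqrt(v))     # site-specific effect estimates
full = pooled_estimate(beta_hat, v)
under = pooled_estimate(beta_hat, 0.6 * v)  # 40% proportional underestimation
# With tau2 = 0, proportional underestimation rescales every weight by the same
# factor, so the pooled point estimate is unchanged; only its standard error
# (and, with tau2 > 0, the relative weighting across sites) is affected.
```

This is why the effect only becomes visible when the number of sites is small, the underestimation is severe, or it varies non-proportionally across sites.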
Abstract:
Target localization has a wide range of military and civilian applications in wireless mobile networks. Examples include battlefield surveillance, emergency 911 (E911), traffic alert, habitat monitoring, resource allocation, routing, and disaster mitigation. Basic localization techniques include time-of-arrival (TOA), direction-of-arrival (DOA) and received-signal-strength (RSS) estimation. Techniques based on TOA and DOA are very sensitive to the availability of line-of-sight (LOS), the direct path between the transmitter and the receiver. If LOS is not available, TOA and DOA estimation errors create a large localization error. In order to reduce NLOS localization error, NLOS identification, mitigation, and localization techniques have been proposed. This research investigates NLOS identification for multiple-antenna radio systems. The techniques proposed in the literature mainly use one antenna element to enable NLOS identification. When a single antenna is utilized, only limited features of the wireless channel can be exploited to identify NLOS situations. However, in DOA-based wireless localization systems, multiple antenna elements are available. In addition, multiple-antenna technology has been adopted in many widely used wireless systems, such as wireless LAN 802.11n and WiMAX 802.16e, which are good candidates for localization-based services. In this work, the potential of spatial channel information for high-performance NLOS identification is investigated. Considering narrowband multiple-antenna wireless systems, two NLOS identification techniques are proposed. First, the spatial correlation of channel coefficients across antenna elements is proposed as a metric for NLOS identification. In order to obtain the spatial correlation, a new multi-input multi-output (MIMO) channel model based on rough surface theory is proposed.
This model can be used to compute the spatial correlation between an antenna pair separated by any distance. In addition, a new NLOS identification technique that exploits the statistics of the phase difference across two antenna elements is proposed. This technique assumes that the phases received across two antenna elements are uncorrelated, an assumption validated via the well-known circular and elliptic scattering models. Next, it is proved that the channel Rician K-factor is a function of the phase-difference variance. Exploiting the Rician K-factor, techniques to identify NLOS scenarios are proposed. Considering wideband multiple-antenna wireless systems that use MIMO orthogonal frequency division multiplexing (OFDM) signaling, space-time-frequency channel correlation is exploited to attain NLOS identification in time-varying, frequency-selective and space-selective radio channels. Novel NLOS identification measures based on space, time and frequency channel correlation are proposed and their performance is evaluated. These measures achieve better NLOS identification performance than those that use only space, time or frequency.
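As an illustration of the spatial-correlation metric (not the thesis' exact estimator), the sketch below correlates channel coefficients across two antenna elements and thresholds the result; the 0.5 threshold and the toy channel realizations are assumptions:

```python
import numpy as np

def spatial_correlation(h1, h2):
    """Magnitude of the complex correlation between two antennas' coefficients."""
    return np.abs(np.vdot(h1, h2)) / (np.linalg.norm(h1) * np.linalg.norm(h2))

def classify(h1, h2, threshold=0.5):
    # A dominant LOS component keeps the antennas highly correlated, while
    # rich NLOS scattering decorrelates them (threshold is an assumed value).
    return "NLOS" if spatial_correlation(h1, h2) < threshold else "LOS"

rng = np.random.default_rng(1)
n = 1000
# LOS-like: a strong common component plus weak scatter, phase-shifted between antennas.
los = 1.0 + 0.1 * (rng.standard_normal(n) + 1j * rng.standard_normal(n))
# NLOS-like: independent Rayleigh fading at each antenna.
scatter1 = rng.standard_normal(n) + 1j * rng.standard_normal(n)
scatter2 = rng.standard_normal(n) + 1j * rng.standard_normal(n)
```

The rough-surface MIMO channel model in the thesis supplies the expected correlation as a function of antenna spacing, which is what makes the threshold principled rather than ad hoc.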
Abstract:
To mitigate greenhouse gas (GHG) emissions and reduce U.S. dependence on imported oil, the United States is pursuing several options to create biofuels from renewable woody biomass (hereafter referred to as "biomass"). Because of the distributed nature of biomass feedstock, the cost and complexity of biomass recovery operations pose significant challenges that hinder increased biomass utilization for energy production. To facilitate the exploration of a wide variety of conditions that promise profitable biomass utilization and to tap unused forest residues, it is proposed to develop biofuel supply chain models based on optimization and simulation approaches. The biofuel supply chain is structured around four components: biofuel facility locations and sizes, biomass harvesting/forwarding, transportation, and storage. A Geographic Information System (GIS) based approach is proposed as a first step for selecting potential facility locations for biofuel production from forest biomass based on a set of evaluation criteria, such as accessibility to biomass, the railway/road transportation network, water bodies, and workforce. The development of optimization and simulation models is also proposed. The results of the models will be used to determine (1) the number, location, and size of the biofuel facilities, and (2) the amounts of biomass to be transported between the harvesting areas and the biofuel facilities over a 20-year timeframe. The multi-criteria objective is to simultaneously minimize the weighted sum of the delivered feedstock cost, energy consumption, and GHG emissions. Finally, a series of sensitivity analyses will be conducted to identify the sensitivity of the decisions, such as the optimal site selected for the biofuel facility, to changes in influential parameters, such as biomass availability and transportation fuel price.
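The weighted-sum siting decision can be sketched with a toy brute-force model in which delivered cost, energy, and GHG emissions each scale with the biomass-weighted haul distance; all data, unit factors, and weights below are invented for illustration:

```python
import numpy as np
from itertools import combinations

def best_sites(dist, supply, k, unit=np.array([1.0, 0.3, 0.1]),
               weights=np.array([0.5, 0.3, 0.2])):
    """dist: (harvest_areas, candidate_sites) distances; supply: biomass per area.
    Enumerates k-site subsets, assigns each area to its nearest open facility,
    and scores the weighted sum of (cost, energy, GHG) objectives."""
    best, best_score = None, np.inf
    for subset in combinations(range(dist.shape[1]), k):
        haul = float((dist[:, subset].min(axis=1) * supply).sum())
        score = float(weights @ (unit * haul))  # weighted-sum multi-criteria objective
        if score < best_score:
            best, best_score = subset, score
    return best, best_score
```

The proposed models would replace this enumeration with mathematical programming and simulation over a 20-year horizon, but the weighted-sum objective has the same shape.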
Intellectual Merit: The proposed research will facilitate the exploration of a wide variety of conditions that promise profitable biomass utilization in the renewable biofuel industry. The GIS-based facility location analysis considers a series of factors that have not been considered simultaneously in previous research. Location analysis is critical to the financial success of producing biofuel. The modeling of woody biomass supply chains using both optimization and simulation, combined with the GIS-based approach as a precursor, has not been done to date. The optimization and simulation models can help to ensure the economic and environmental viability and sustainability of the entire biofuel supply chain at both the strategic design level and the operational planning level. Broader Impacts: The proposed models for biorefineries can be applied to other types of manufacturing or processing operations using biomass, because the biomass feedstock supply chain is similar, if not the same, for biorefineries, biomass-fired or co-fired power plants, and torrefaction/pelletization operations. Additionally, the results of this research will continue to be disseminated internationally through publications in journals such as Biomass and Bioenergy and Renewable Energy, and through presentations at conferences such as the 2011 Industrial Engineering Research Conference. For example, part of the research work related to biofuel facility identification has been published: Zhang, Johnson and Sutherland [2011] (see Appendix A). There will also be opportunities for the Michigan Tech campus community to learn about the research through the Sustainable Future Institute.
Abstract:
Invasive exotic plants have altered natural ecosystems across much of North America. In the Midwest, the presence of invasive plants is increasing rapidly, causing changes in ecosystem patterns and processes. Early detection has become a key component in invasive plant management and in the detection of ecosystem change. Risk assessment through predictive modeling has been a useful resource for monitoring and assisting with treatment decisions for invasive plants. Predictive models were developed to assist with early detection of ten target invasive plants in the Great Lakes Network of the National Park Service and for garlic mustard throughout the Upper Peninsula of Michigan. These multi-criteria risk models utilize geographic information system (GIS) data to predict the areas at highest risk for three phases of invasion: introduction, establishment, and spread. An accuracy assessment of the models for the ten target plants in the Great Lakes Network showed an average overall accuracy of 86.3%. The model developed for garlic mustard in the Upper Peninsula resulted in an accuracy of 99.0%. Used as one of many resources, the risk maps created from the model outputs will assist with the detection of ecosystem change, the monitoring of plant invasions, and the management of invasive plants through prioritized control efforts.
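A multi-criteria GIS risk model of this kind is, at its core, a weighted overlay of normalized layers. A minimal sketch, with layer names and weights invented for illustration (not the actual criteria used for the ten target plants):

```python
import numpy as np

def risk_surface(layers, weights):
    """layers: dict name -> (H, W) raster scaled to [0, 1]; weights sum to 1.
    Returns the weighted-overlay risk raster (higher = higher invasion risk)."""
    out = np.zeros(next(iter(layers.values())).shape)
    for name, w in weights.items():
        out += w * layers[name]
    return out

# Tiny 2x2 rasters standing in for GIS layers (values illustrative).
layers = {
    "road_proximity":  np.array([[1.0, 0.2], [0.5, 0.0]]),  # introduction pathway
    "canopy_openness": np.array([[0.8, 0.9], [0.1, 0.3]]),  # establishment suitability
    "soil_moisture":   np.array([[0.6, 0.4], [0.7, 0.2]]),  # establishment suitability
}
weights = {"road_proximity": 0.5, "canopy_openness": 0.3, "soil_moisture": 0.2}
risk = risk_surface(layers, weights)
```

Separate weight sets per invasion phase (introduction, establishment, spread) would yield the three risk maps described above.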
Abstract:
Business strategy is important to all organizations. Nearly all Fortune 500 firms are implementing Enterprise Resource Planning (ERP) systems to improve the execution of their business strategy and to improve its integration with their information technology (IT) strategy. Successful implementation of these multi-million-dollar software systems requires new emphasis on change management and on business-IT strategic alignment. This paper examines business and IT strategic alignment and explores whether an ERP implementation can drive business process reengineering and business-IT strategic alignment. An overview of business strategy and strategic alignment is followed by an analysis of ERP. The "As-Is/To-Be" process model is then presented and explained as a simple but vital tool for improving business strategy, strategic alignment, and ERP implementation success.
Abstract:
The CopA copper ATPase of Enterococcus hirae belongs to the family of heavy-metal-pumping CPx-type ATPases and shares 43% sequence similarity with the human Menkes and Wilson copper ATPases. Due to a lack of suitable protein crystals, only partial three-dimensional structures have so far been obtained for this family of ion pumps. We present a structural model of CopA derived by combining topological information obtained by intramolecular cross-linking with molecular modeling. Purified CopA was cross-linked with different bivalent reagents, followed by tryptic digestion and identification of cross-linked peptides by mass spectrometry. The structural proximity of tryptic fragments provided information about the arrangement of the hydrophilic protein domains, which was integrated into a three-dimensional model of CopA. Comparative modeling of CopA was guided by its sequence similarity to the calcium ATPase of the sarcoplasmic reticulum, Serca1, for which detailed structures are available. In addition, known partial structures of CPx-type ATPases homologous to CopA were used as modeling templates. A docking approach was used to predict the orientation of the heavy-metal-binding domain of CopA relative to the core structure, which was verified by distance constraints derived from the cross-links. The overall structural model of CopA resembles the Serca1 structure but reveals distinctive features of CPx-type ATPases. A prominent feature is the positioning of the heavy-metal-binding domain, which orients its Cu-binding ligands appropriately for interaction with Cu-loaded metallochaperones in solution. Moreover, a novel model of the architecture of the intramembranous Cu-binding sites could be derived.
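The verification step, checking a docking pose against cross-link-derived distance constraints, can be sketched as follows; the 24 Å Cα-Cα cutoff is a typical assumed value for bivalent cross-linking reagents, not taken from the paper:

```python
import numpy as np

def satisfied_crosslinks(ca_coords, crosslinks, max_dist=24.0):
    """ca_coords: residue id -> C-alpha (x, y, z) in Å; crosslinks: (res_i, res_j)
    pairs. A cross-link is consistent with the model if the linked residues lie
    within the reagent's spacer-arm reach (max_dist, an assumed cutoff)."""
    return [
        float(np.linalg.norm(np.asarray(ca_coords[i]) - np.asarray(ca_coords[j]))) <= max_dist
        for i, j in crosslinks
    ]
```

Docking poses of the heavy-metal-binding domain that violate many observed cross-links would be rejected; poses satisfying them are retained, as in the orientation prediction described above.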