993 results for domain model


Relevance:

40.00%

Publisher:

Abstract:

Osteoarticular allograft transplantation is a popular treatment method in wide surgical resections with large defects. For this reason, hospitals are building bone data banks. Performing the optimal allograft selection on bone banks is crucial to the surgical outcome and patient recovery. However, current approaches are very time consuming, hindering efficient selection. We present an automatic method based on registration of femur bones to overcome this limitation. We introduce a new regularization term for the log-domain demons algorithm. This term replaces the standard Gaussian smoothing with a femur-specific polyaffine model. The polyaffine femur model is constructed with two affine (femoral head and condyles) and one rigid (shaft) transformation. Our main contribution in this paper is to show that the demons algorithm can be improved in specific cases with an appropriate model. We do not try to find the optimal polyaffine model of the femur, but rather the simplest model with a minimal number of parameters. There is no need to optimize over different numbers of regions, boundaries and choices of weights, since this fine-tuning is done automatically by a final demons relaxation step with Gaussian smoothing. The newly developed synthesis approach provides a clear, anatomically motivated modeling contribution through the specific three-component transformation model, and clearly shows a performance improvement (in terms of anatomically meaningful correspondences) on 146 CT images of femurs compared to a standard multiresolution demons. In addition, this simple model improves the robustness of the demons while preserving its accuracy. The ground truth consists of manual measurements performed by medical experts.
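
As a rough illustration of the polyaffine regularization described above, the sketch below fuses region-wise log-transforms (two affine components for the femoral head and condyles, one rigid component for the shaft) into a single stationary velocity field using soft region weights, in the spirit of the log-Euclidean polyaffine framework. This is only a minimal sketch of the fusion step, assuming the region transforms and weights have already been estimated; the function and variable names are hypothetical and this is not the authors' implementation. In the paper, such a fused field stands in for the Gaussian smoothing in the log-domain demons update, with a final demons relaxation using Gaussian smoothing applied afterwards.

```python
import numpy as np

def polyaffine_velocity(points, log_transforms, weights):
    """Fuse region-wise log-transforms into one stationary velocity field.

    points:         (N, 3) array of voxel coordinates
    log_transforms: list of K (4, 4) matrix logarithms of the region transforms
                    (e.g., two affine components for head/condyles, one rigid for the shaft)
    weights:        (N, K) soft region memberships, one column per transform, rows summing to 1
    """
    pts_h = np.hstack([points, np.ones((points.shape[0], 1))])  # homogeneous coordinates
    velocity = np.zeros((points.shape[0], 3))
    for k, log_m in enumerate(log_transforms):
        # contribution of region k at every point, weighted by its soft membership
        velocity += weights[:, [k]] * (pts_h @ log_m.T)[:, :3]
    return velocity
```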

Relevance:

40.00%

Publisher:

Abstract:

The performance of the reanalysis-driven Canadian Regional Climate Model, version 5 (CRCM5) in reproducing the present climate over the North American COordinated Regional climate Downscaling EXperiment domain for the 1989–2008 period has been assessed in comparison with several observation-based datasets. The model reproduces the near-surface temperature and precipitation characteristics satisfactorily over most of North America. Coastal and mountainous zones remain problematic: a cold bias (2–6 °C) prevails over the Rocky Mountains in summertime and all year round over Mexico, and winter precipitation in mountainous coastal regions is overestimated. The precipitation patterns related to the North American Monsoon are well reproduced, except at its northern limit. The spatial and temporal structure of the Great Plains Low-Level Jet is well reproduced by the model; however, the night-time precipitation maximum in the jet area is underestimated. The performance of CRCM5 was also assessed against earlier CRCM versions and other RCMs. CRCM5 is shown to be substantially improved compared to CRCM3 and CRCM4 in terms of seasonal mean statistics, and to be comparable to other modern RCMs.

Relevance:

40.00%

Publisher:

Abstract:

AIM As technological interventions treating acute myocardial infarction (MI) improve, post-ischemic heart failure increasingly threatens patient health. The aim of the current study was to test whether FADD could be a potential target of gene therapy in the treatment of heart failure. METHODS Cardiomyocyte-specific FADD knockout mice along with non-transgenic littermates (NLC) were subjected to 30 minutes of myocardial ischemia followed by 7 days of reperfusion, or to 6 weeks of permanent myocardial ischemia via ligation of the left main descending coronary artery. Cardiac function was evaluated by echocardiography and left ventricular (LV) catheterization, and cardiomyocyte death was measured by Evans blue-TTC staining, TUNEL staining, and caspase-3, -8, and -9 activities. In vitro, H9C2 cells transfected with either scrambled siRNA or FADD siRNA were stressed with chelerythrine for 30 min and cleaved caspase-3 was assessed. RESULTS FADD expression was significantly decreased in FADD knockout mice compared to NLC. Ischemia/reperfusion (I/R) upregulated FADD expression in NLC mice, but not in FADD knockout mice, at the early time point. FADD deletion significantly attenuated I/R-induced cardiac dysfunction, decreased myocardial necrosis, and inhibited cardiomyocyte apoptosis. Furthermore, in the 6-week long-term permanent ischemia model, FADD deletion significantly reduced the infarct size (from 41.20 ± 3.90% in NLC to 26.83 ± 4.17% with FADD deletion), attenuated myocardial remodeling, improved cardiac function and improved survival. In vitro, FADD knockdown significantly reduced the chelerythrine-induced level of cleaved caspase-3. CONCLUSION Taken together, our results suggest FADD plays a critical role in post-ischemic heart failure. Inhibition of FADD retards heart failure progression. Our data support the further investigation of FADD as a potential target for genetic manipulation in the treatment of heart failure.

Relevance:

40.00%

Publisher:

Abstract:

Retinal vein occlusion is a leading cause of visual impairment. Experimental models of this condition based on laser photocoagulation of retinal veins have been described and extensively exploited in mammals and larger rodents such as the rat. However, few reports exist on the use of this paradigm in the mouse. The objective of this study was to investigate a model of branch and central retinal vein occlusion in the mouse and to characterize longitudinal retinal morphology alterations in vivo using spectral domain optical coherence tomography. Retinal veins were experimentally occluded using laser photocoagulation after intravenous application of Rose Bengal, a photo-activator dye enhancing thrombus formation. Depending on the number of veins occluded, variable amounts of capillary dropout were seen on fluorescein angiography. Vascular endothelial growth factor levels were markedly elevated early and peaked at day one. Retinal thickness measurements with spectral domain optical coherence tomography showed significant swelling (p<0.001) compared to baseline, followed by gradual thinning that plateaued two weeks after the experimental intervention (p<0.001). Histological findings at day seven correlated with spectral domain optical coherence tomography imaging. The inner layers were predominantly affected by degeneration, with the outer nuclear layer and the photoreceptor outer segments largely preserved. The application of this retinal vein occlusion model in the mouse carries several advantages over its use in larger species, such as access to a vast range of genetically modified animals. Retinal changes after experimental retinal vein occlusion in this mouse model can be non-invasively quantified by spectral domain optical coherence tomography, and may be used to monitor the effects of potential therapeutic interventions.

Relevance:

40.00%

Publisher:

Abstract:

Inteins are protein-splicing elements, most of which contain conserved sequence blocks that define a family of homing endonucleases. Like group I introns that encode such endonucleases, inteins are mobile genetic elements. Recent crystallography and computer modeling studies suggest that inteins consist of two structural domains that correspond to the endonuclease and the protein-splicing elements. To determine whether the bipartite structure of inteins is mirrored by the functional independence of the protein-splicing domain, the entire endonuclease component was deleted from the Mycobacterium tuberculosis recA intein. Guided by computer modeling studies, and taking advantage of genetic systems designed to monitor intein function, the 440-aa Mtu recA intein was reduced to a functional mini-intein of 137 aa. The accuracy of splicing of several mini-inteins was verified. This work not only substantiates structure predictions for intein function but also supports the hypothesis that, like group I introns, mobile inteins arose by an endonuclease gene invading a sequence encoding a small, functional splicing element.

Relevance:

40.00%

Publisher:

Abstract:

Caveolae are striking morphological features of the plasma membrane of mammalian cells. Caveolins, the major proteins of caveolae, play a crucial role in the formation of these invaginations of the plasma membrane; however, the precise mechanisms involved are only just starting to be unravelled. Recent studies suggest that caveolae are stable structures first generated in the Golgi complex. Their formation and exit from the Golgi complex is associated with caveolin oligomerisation, acquisition of detergent insolubility, and association with cholesterol. Modelling of caveolin-membrane interactions together with in vitro studies of caveolin peptides are providing new insights into how caveolin-lipid interactions could generate the unique architecture of the caveolar domain.

Relevance:

40.00%

Publisher:

Abstract:

Semantic data models provide a map of the components of an information system. The characteristics of these models affect their usefulness for various tasks (e.g., information retrieval). The quality of information retrieval has obvious important consequences, both economic and otherwise. Traditionally, database designers have produced parsimonious logical data models. In spite of their increased size, ontologically clearer conceptual models have been shown to facilitate better performance for both problem-solving and information retrieval tasks in experimental settings. The experiments producing evidence of enhanced performance for ontologically clearer models have, however, used application domains of modest size. Data models in organizational settings are likely to be substantially larger than those used in these experiments. This research used an experiment to investigate whether the benefits of improved information retrieval performance associated with ontologically clearer models are robust as the size of the application domain increases. The experiment used an application domain approximately twice the size of those tested in prior experiments. The results indicate that, relative to the users of the parsimonious implementation, end users of the ontologically clearer implementation made significantly more semantic errors, took significantly more time to compose their queries, and were significantly less confident in the accuracy of their queries.
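
The contrast at the heart of this experiment can be sketched informally: a parsimonious model overloads a single construct, while an ontologically clearer model gives each distinct real-world thing its own construct. The entities and fields below are hypothetical illustrations of that general distinction, not the experimental materials used in the study.

```python
from dataclasses import dataclass

# Parsimonious style: one overloaded entity with a type flag and
# optional attributes whose meaning depends on that flag.
@dataclass
class Party:
    party_type: str            # "customer" or "supplier"
    name: str
    credit_limit: float = 0.0  # only meaningful for customers
    lead_time_days: int = 0    # only meaningful for suppliers

# Ontologically clearer style: one construct per real-world thing,
# so every attribute is meaningful for every instance.
@dataclass
class Customer:
    name: str
    credit_limit: float

@dataclass
class Supplier:
    name: str
    lead_time_days: int
```

The clearer style is larger (more constructs) but leaves less room for misreading an attribute; the study asks whether that advantage survives as the model grows.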

Relevance:

40.00%

Publisher:

Abstract:

A Multi-Domain Information Model for the organisation of information bases is presented.

Relevance:

40.00%

Publisher:

Abstract:

Software engineering researchers are challenged to provide increasingly powerful levels of abstraction to address the rising complexity inherent in software solutions. One new development paradigm that places models, as abstractions, at the forefront of the development process is Model-Driven Software Development (MDSD). MDSD considers models as first-class artifacts, extending the capability for engineers to use concepts from the problem domain of discourse to specify apropos solutions. A key component in MDSD is domain-specific modeling languages (DSMLs), which are languages with focused expressiveness targeting a specific taxonomy of problems. The de facto approach is to first transform DSML models to an intermediate artifact in a high-level language (HLL), e.g., Java or C++, and then execute the resulting code.

Our research group has developed a class of DSMLs, referred to as interpreted DSMLs (i-DSMLs), where models are directly interpreted by a specialized execution engine with semantics based on model changes at runtime. This execution engine uses a layered architecture and is referred to as a domain-specific virtual machine (DSVM). As the domain-specific model being executed descends the layers of the DSVM, the semantic gap between the user-defined model and the services provided by the underlying infrastructure is closed. The focus of this research is the synthesis engine, the layer in the DSVM which transforms i-DSML models into executable scripts for the next lower layer to process.

The appeal of an i-DSML is constrained because it possesses unique semantics contained within the DSVM. Existing DSVMs for i-DSMLs exhibit tight coupling between the implicit model of execution and the semantics of the domain, making it difficult to develop DSVMs for new i-DSMLs without a significant investment in resources.

At the onset of this research, only one i-DSML had been created using the aforementioned approach, for the user-centric communication domain. This i-DSML is the Communication Modeling Language (CML) and its DSVM is the Communication Virtual Machine (CVM). A major problem with the CVM's synthesis engine is that the domain-specific knowledge (DSK) and the model of execution (MoE) are tightly interwoven; consequently, subsequent DSVMs would need to be developed from inception with no reuse of expertise.

This dissertation investigates how to decouple the DSK from the MoE and subsequently produce a generic model of execution (GMoE) from the remaining application logic. This GMoE can be reused to instantiate synthesis engines for DSVMs in other domains. The generalized approach to developing the model synthesis component of i-DSML interpreters utilizes a reusable framework loosely coupled to the DSK through swappable framework extensions.

This approach involves first creating an i-DSML and its DSVM for a second domain, demand-side smartgrid (microgrid) energy management, and designing the synthesis engine so that the DSK and MoE are easily decoupled. To validate the utility of the approach, the synthesis engines are instantiated using the GMoE and the DSKs of the two aforementioned domains, and an empirical study is performed to support our claim of reduced developmental effort.
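
The dissertation's central idea of separating a generic model of execution from swappable domain-specific knowledge can be illustrated with a minimal sketch. The class and method names below (DomainKnowledge, GenericSynthesisEngine, decompose, to_script_step) are hypothetical illustrations of the decoupling pattern, not the actual CVM or CML APIs.

```python
from abc import ABC, abstractmethod

class DomainKnowledge(ABC):
    """Swappable extension capturing the domain-specific knowledge (DSK) -- hypothetical interface."""

    @abstractmethod
    def decompose(self, model_change):
        """Break a runtime model change into domain-level steps."""

    @abstractmethod
    def to_script_step(self, step):
        """Translate one domain-level step into an executable script fragment for the next layer."""

class GenericSynthesisEngine:
    """Generic model of execution (GMoE): a domain-agnostic control loop."""

    def __init__(self, dsk: DomainKnowledge):
        self.dsk = dsk  # the DSK is injected as a plug-in, not hard-coded

    def synthesize(self, model_change):
        # The control flow never inspects domain semantics directly;
        # all domain decisions are delegated to the injected DSK.
        return [self.dsk.to_script_step(step) for step in self.dsk.decompose(model_change)]
```

Under this split, supporting a new domain would only require a new DomainKnowledge extension; the control loop in GenericSynthesisEngine stays untouched.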

Relevance:

40.00%

Publisher:

Abstract:

The study examined a modified social cognitive model of domain satisfaction (Lent, 2004). In addition to social cognitive variables and trait positive affect, the model included two aspects of adult attachment, attachment anxiety and avoidance. The study extended recent research on well-being and satisfaction in academic, work, and social domains. The adjusted model was tested in a sample of 454 college students, in order to determine the role of adult attachment variables in explaining social satisfaction, above and beyond the direct and indirect effects of trait positive affect. Confirmatory factor analysis found support for 8 correlated factors in the modified model: social domain satisfaction, positive affect, attachment avoidance, attachment anxiety, social support, social self-efficacy, social outcome expectations, and social goal progress. Three alternative structural models were tested to account for the ways in which attachment anxiety and attachment avoidance might relate to social satisfaction. Results of model testing provided support for a model in which attachment avoidance produced only an indirect path to social satisfaction via self-efficacy and social support. Positive affect, avoidance, social support, social self-efficacy, and goal progress each produced significant direct or indirect paths to social domain satisfaction, though attachment anxiety and social outcome expectations did not contribute to the predictive model. Implications of the findings regarding the modified social cognitive model of social domain satisfaction were discussed.

Relevance:

30.00%

Publisher:

Abstract:

Practitioners and academics have developed numerous maturity models for many domains in order to measure competency. These initiatives have often been influenced by the Capability Maturity Model. However, no cumulative effort has been made to generalize the phases of developing a maturity model in any domain. This paper proposes such a methodology and outlines the main phases of generic model development. The proposed methodology is illustrated with the help of examples from two advanced maturity models in the domains of Business Process Management and Knowledge Management.

Relevance:

30.00%

Publisher:

Abstract:

With the advent of Service Oriented Architecture, Web services have gained tremendous popularity. Due to the availability of a large number of Web services, finding an appropriate Web service according to the requirements of the user is a challenge. This warrants the need to establish an effective and reliable process of Web service discovery. A considerable body of research has emerged to develop methods that improve the accuracy of Web service discovery to match the best service. The process of Web service discovery results in suggesting many individual services that partially fulfil the user's interest. Considering the semantic relationships of the words used to describe the services, as well as their input and output parameters, can lead to more accurate Web service discovery, and appropriate linking of individually matched services should fully satisfy the requirements the user is looking for. This research proposes to integrate a semantic model and a data mining technique to enhance the accuracy of Web service discovery, through a novel three-phase Web service discovery methodology.

The first phase performs match-making to find semantically similar Web services for a user query. In order to perform semantic analysis on the content of the Web service description language documents, a support-based latent semantic kernel is constructed using an innovative concept of binning and merging on a large quantity of text documents covering diverse domains of knowledge. The use of a generic latent semantic kernel constructed with a large number of terms helps to find the hidden meaning of query terms which otherwise could not be found. Sometimes a single Web service is unable to fully satisfy the requirements of the user; in such cases, a composition of multiple inter-related Web services is presented to the user. The task of checking the possibility of linking multiple Web services is done in the second phase. Once the feasibility of linking Web services is checked, the objective is to provide the user with the best composition of Web services. In this link-analysis phase, the Web services are modelled as nodes of a graph and an all-pairs shortest-path algorithm is applied to find the optimal, minimum-cost traversal path. The third phase, system integration, integrates the results from the preceding two phases using an original fusion algorithm in the fusion engine. Finally, the recommendation engine, which is an integral part of the system integration phase, makes the final recommendations, including individual and composite Web services, to the user.

In order to evaluate the performance of the proposed method, extensive experimentation has been performed. Results of the proposed support-based semantic kernel method of Web service discovery are compared with the results of a standard keyword-based information-retrieval method and a clustering-based machine-learning method of Web service discovery. The proposed method outperforms both the information-retrieval and the machine-learning based methods. Experimental results and statistical analysis also show that the best Web service compositions are obtained by considering 10 to 15 of the Web services found in phase I for linking. Empirical results also ascertain that the fusion engine boosts the accuracy of Web service discovery by combining the inputs from both the semantic analysis (phase I) and the link analysis (phase II) in a systematic fashion.
Overall, the accuracy of Web service discovery with the proposed method shows a significant improvement over traditional discovery methods.
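
The link-analysis phase can be pictured with a small sketch: candidate services from phase I become graph nodes, an edge exists when one service's outputs can feed another's inputs, and an all-pairs shortest-path algorithm (Floyd-Warshall here, as one possible choice) yields minimum-cost composition paths. The function name and the can_link/cost callbacks are hypothetical placeholders, assumed to wrap the thesis's semantic matching and cost model; this is a sketch of the general idea, not the thesis implementation.

```python
import itertools
import math

def all_pairs_min_cost(services, can_link, cost):
    """Floyd-Warshall over the candidate services returned by the match-making phase.

    services: list of service identifiers (graph nodes)
    can_link: can_link(a, b) -> True if a's outputs satisfy b's inputs
    cost:     cost(a, b) -> traversal cost of invoking b after a
    Returns (dist, nxt): minimum composition costs and a successor table for path reconstruction.
    """
    dist = {(a, b): (0.0 if a == b else
                     cost(a, b) if can_link(a, b) else math.inf)
            for a, b in itertools.product(services, repeat=2)}
    nxt = {(a, b): b for (a, b), d in dist.items() if a != b and d < math.inf}
    # Relax every pair (i, j) through every intermediate service k.
    for k, i, j in itertools.product(services, repeat=3):
        if dist[(i, k)] + dist[(k, j)] < dist[(i, j)]:
            dist[(i, j)] = dist[(i, k)] + dist[(k, j)]
            nxt[(i, j)] = nxt[(i, k)]
    return dist, nxt
```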

Relevance:

30.00%

Publisher:

Abstract:

An earlier CRC-CI project on ‘automatic estimating’ (AE) has shown the key benefit of model-based design methodologies in building design and construction to be the provision of timely quantitative cost evaluations. Furthermore, using AE during design improves design options, and results in improved design turn-around times, better design quality and/or lower costs. However, AEs for civil engineering structures do not exist, and research partners in the CRC-CI expressed interest in exploring the development of such a process. This document reports on these investigations. The central objective of the study was to evaluate the benefits and costs of developing an AE for concrete civil engineering works. By studying existing documents and through interviews with design engineers, contractors and estimators, we have established that current civil engineering practices (mainly roads/bridges) do not use model-based planning/design. Drawings are executed in 2D and only completed at the end of lengthy planning/design project management lifecycle stages. We have also determined that estimating plays two important but different roles. The first is part of project management (which we have called macro-level estimating). Estimating in this domain sets project budgets, controls quality delivery and contains costs. The second role is estimating during planning/design (micro-level estimating). The difference between the two roles is that the former is performed at the end of various lifecycle stages, whereas the latter is performed at any suitable time during planning/design.

Relevance:

30.00%

Publisher:

Abstract:

Although the lack of elaborate governance mechanisms is often seen as the main reason for failures of SOA projects, SOA governance is still very low in maturity. In this paper, we follow a design science approach to address this drawback by presenting a framework that can guide organisations in implementing a governance approach for SOA more successfully. We have reviewed the highly advanced IT governance frameworks Cobit and ITIL and mapped them to the SOA domain. The resulting blueprint for an SOA governance framework was refined based on a detailed literature review, expert interviews and a practical application in a government organisation. The proposed framework stresses the need for business representatives to get involved in SOA decisions and to define benefits ownership for services.