992 results for brain modeling
Abstract:
Purpose: All currently considered parametric models used for decomposing videokeratoscopy height data are viewer-centered and hence describe what the operator sees rather than what the surface is. The purpose of this study was to ascertain the applicability of an object-centered representation to the modeling of corneal surfaces. Methods: A three-dimensional surface decomposition into a series of spherical harmonics is considered and compared with the traditional Zernike polynomial expansion for a range of videokeratoscopic height data. Results: Spherical harmonic decomposition led to significantly better fits to corneal surfaces (in terms of the root mean square error values) than the corresponding Zernike polynomial expansions with the same number of coefficients, for all considered corneal surfaces, corneal diameters, and model orders. Conclusions: Spherical harmonic decomposition is a viable alternative to Zernike polynomial decomposition. It achieves better fits to videokeratoscopic height data and has the advantage of an object-centered representation that could be particularly suited to the analysis of multiple corneal measurements.
Abstract:
Over recent years, many scholars have studied the conceptual modeling of information systems based on a theory of ontological expressiveness. This theory offers four constructs that inform properties of modeling grammars in the form of ontological deficiencies, and their implications for the development and use of conceptual modeling in IS practice. In this paper we report on the development of a valid and reliable instrument for measuring the perceptions that individuals have of the ontological deficiencies of conceptual modeling grammars. We describe a multi-stage approach for instrument development that incorporates feedback from expert and user panels. We also report on a field test of the instrument with 590 modeling practitioners. We further study how different levels of modeling experience influence user perceptions of the ontological deficiencies of modeling grammars. We provide implications for practice and future research.
Abstract:
Purpose: To ascertain the effectiveness of object-centered three-dimensional representations for the modeling of corneal surfaces. Methods: Three-dimensional (3D) surface decompositions into series of basis functions, including (i) spherical harmonics, (ii) hemispherical harmonics, and (iii) 3D Zernike polynomials, were considered and compared to the traditional viewer-centered representation of two-dimensional (2D) Zernike polynomial expansion for a range of retrospective videokeratoscopic height data from three clinical groups. The data were collected using the Medmont E300 videokeratoscope. The groups included 10 normal corneas with corneal astigmatism less than −0.75 D, 10 astigmatic corneas with corneal astigmatism between −1.07 D and 3.34 D (Mean = −1.83 D, SD = ±0.75 D), and 10 keratoconic corneas. Only data from the right eyes of the subjects were considered. Results: All object-centered decompositions led to significantly better fits to corneal surfaces (in terms of the RMS error values) than the corresponding 2D Zernike polynomial expansions with the same number of coefficients, for all considered corneal surfaces, corneal diameters (2, 4, 6, and 8 mm), and model orders (4th to 10th radial orders). The best results (smallest RMS fit error) were obtained with spherical harmonics decomposition, which led to about a 22% reduction in the RMS fit error compared to the traditional 2D Zernike polynomials. Hemispherical harmonics and the 3D Zernike polynomials reduced the RMS fit error by about 15% and 12%, respectively. Larger reductions in RMS fit error were achieved for smaller corneal diameters and lower-order fits. Conclusions: Object-centered 3D decompositions provide viable alternatives to the traditional viewer-centered 2D Zernike polynomial expansion of a corneal surface. They achieve better fits to videokeratoscopic height data and could be particularly suited to the analysis of multiple corneal measurements, where there can be slight variations in the position of the cornea from one map acquisition to the next.
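An illustrative sketch of the kind of decomposition compared in this study: fitting height samples with real-valued spherical harmonics by linear least squares. The maximum order, the real-harmonic construction, and the toy data below are assumptions made for illustration, not the authors' implementation or dataset.

```python
import numpy as np
from scipy.special import sph_harm

L_MAX = 6  # assumed maximum harmonic order

def design_matrix(theta, phi, l_max=L_MAX):
    """Columns are real-valued spherical harmonics evaluated at (theta, phi)."""
    cols = []
    for l in range(l_max + 1):
        for m in range(-l, l + 1):
            # SciPy convention: sph_harm(m, n, azimuth, polar)
            y = sph_harm(abs(m), l, phi, theta)
            if m < 0:
                cols.append(np.sqrt(2) * y.imag)
            elif m == 0:
                cols.append(y.real)
            else:
                cols.append(np.sqrt(2) * y.real)
    return np.column_stack(cols)

# Toy data: polar angle theta, azimuth phi, and noisy "height" samples h.
rng = np.random.default_rng(0)
theta = rng.uniform(0.0, 0.4, 500)   # small polar range, like a corneal cap
phi = rng.uniform(0.0, 2 * np.pi, 500)
h = 7.8 * np.cos(theta) + 0.01 * rng.standard_normal(500)

A = design_matrix(theta, phi)
coeffs, *_ = np.linalg.lstsq(A, h, rcond=None)
rms_error = np.sqrt(np.mean((A @ coeffs - h) ** 2))
print(f"{A.shape[1]} coefficients, RMS fit error = {rms_error:.4e}")
```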
Brain-derived neurotrophic factor (BDNF) gene: no major impact on antidepressant treatment response
Abstract:
The brain-derived neurotrophic factor (BDNF) has been suggested to play a pivotal role in the aetiology of affective disorders. In order to further clarify the impact of BDNF gene variation on major depression as well as antidepressant treatment response, association of three BDNF polymorphisms [rs7103411, Val66Met (rs6265) and rs7124442] with major depression and antidepressant treatment response was investigated in an overall sample of 268 German patients with major depression and 424 healthy controls. False discovery rate (FDR) was applied to control for multiple testing. Additionally, ten markers in BDNF were tested for association with citalopram outcome in the STAR*D sample. While BDNF was not associated with major depression as a categorical diagnosis, the BDNF rs7124442 TT genotype was significantly related to worse treatment outcome over 6 wk in major depression (p=0.01) particularly in anxious depression (p=0.003) in the German sample. However, BDNF rs7103411 and rs6265 similarly predicted worse treatment response over 6 wk in clinical subtypes of depression such as melancholic depression only (rs7103411: TT
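For context on the multiple-testing correction mentioned above, a minimal sketch of Benjamini-Hochberg false discovery rate (FDR) control; the p-values are placeholders, not values from the study.

```python
import numpy as np

def benjamini_hochberg(p_values, alpha=0.05):
    """Return a boolean mask of hypotheses rejected at FDR level alpha."""
    p = np.asarray(p_values, dtype=float)
    m = p.size
    order = np.argsort(p)
    ranked = p[order]
    # Find the largest k with p_(k) <= (k/m) * alpha and reject hypotheses 1..k.
    thresholds = (np.arange(1, m + 1) / m) * alpha
    below = ranked <= thresholds
    reject = np.zeros(m, dtype=bool)
    if below.any():
        k = np.max(np.nonzero(below)[0])
        reject[order[: k + 1]] = True
    return reject

print(benjamini_hochberg([0.003, 0.01, 0.04, 0.20, 0.75]))
```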
Abstract:
Process modeling is a complex organizational task that requires many iterations and communication between the business analysts and the domain specialists involved. The challenge is exacerbated when the modeling has to be performed in a cross-organizational, distributed environment. Some systems have been developed to support collaborative process modeling, all of which use traditional 2D interfaces. We present an environment for collaborative process modeling using 3D virtual environment technology. We make use of avatar instantiations of user ego centres to allow for the spatial embodiment of the user with reference to the process model. We describe an innovative prototype collaborative process modeling approach, implemented as a modeling environment in Second Life. This approach leverages virtual environments to provide user context for editing and collaborative exercises. We present a positive preliminary report on a case study in which a test group modelled a business process using the system in Second Life.
Abstract:
Process models provide visual support for analyzing and improving complex organizational processes. In this paper, we discuss differences between process modeling languages using cognitive effectiveness considerations, in order to make statements about ease of use and quality of user experience. Aspects of cognitive effectiveness are important for learning a modeling language, creating models, and understanding models. We identify the criteria of representational clarity, perceptual discriminability, perceptual immediacy, visual expressiveness, and graphic parsimony to compare and assess the cognitive effectiveness of different modeling languages. We apply these criteria in an analysis of the routing elements of UML Activity Diagrams, YAWL, BPMN, and EPCs to uncover their relative strengths and weaknesses from a quality-of-user-experience perspective. We draw conclusions that are relevant to the usability of these languages in business process modeling projects.
Abstract:
The value of business process models depends not only on the choice of graphical elements in the model, but also on their annotation with additional textual and graphical information. This research discusses the use of text and icons for labeling the graphical constructs in a process model. We use two established verb classification schemes to examine the choice of activity labels in process modeling practice. Based on our findings, we synthesize a set of twenty-five activity label categories. We propose a systematic approach for graphically representing these label categories through graphical icons, so that the resulting process models are easier for end users to understand. Our findings contribute to an ongoing stream of research investigating the practice of process modeling and thereby to the body of knowledge about conceptual modeling quality overall.
Abstract:
In this paper, we propose a multivariate GARCH model with a time-varying conditional correlation structure. The new double smooth transition conditional correlation (DSTCC) GARCH model extends the smooth transition conditional correlation (STCC) GARCH model of Silvennoinen and Teräsvirta (2005) by including another variable according to which the correlations change smoothly between states of constant correlations. A Lagrange multiplier test is derived to test the constancy of correlations against the DSTCC-GARCH model, and another one to test for another transition in the STCC-GARCH framework. In addition, other specification tests, with the aim of aiding the model building procedure, are considered. Analytical expressions for the test statistics and the required derivatives are provided. Applying the model to the stock and bond futures data, we discover that the correlation pattern between them has dramatically changed around the turn of the century. The model is also applied to a selection of world stock indices, and we find evidence for an increasing degree of integration in the capital markets.
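As a hedged illustration of the correlation structure referred to above (notation follows the general STCC/DSTCC-GARCH literature rather than this paper's exact specification), the conditional correlation matrix can be written as a double smooth transition between four constant-correlation states:

\[
P_t = (1 - G_{2t})\Big[(1 - G_{1t})\,P_{(11)} + G_{1t}\,P_{(21)}\Big] + G_{2t}\Big[(1 - G_{1t})\,P_{(12)} + G_{1t}\,P_{(22)}\Big],
\qquad
G_{it} = \big(1 + e^{-\gamma_i (s_{it} - c_i)}\big)^{-1}, \quad \gamma_i > 0,
\]

where \(s_{1t}\) and \(s_{2t}\) are the two transition variables; restricting \(G_{2t} \equiv 0\) recovers the single-transition STCC-GARCH case.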
Abstract:
The role of intangible firm capabilities as a source of competitive advantage has come into prominence in the marketing strategy literature due to the Resource Based View. This paper applies the Resource Based View and hypothesizes that strategic flexibility and organisational learning, conceptualised as capabilities, positively affect e-business adoption and competitive advantage. Partial Least Squares analysis suggests that the theoretical constructs function as hypothesised and explain significant variation in e-business adoption and competitive advantage. The results imply that firms adopting e-business should develop capabilities such as strategic flexibility and organisational learning, and that vendor firms may segment their potential clients based on these capabilities.
Abstract:
Background: Despite being the leading cause of death and disability in the paediatric population, traumatic brain injury (TBI) in this group is largely understudied. Clinical practice within the paediatric intensive care unit (PICU) has been based upon adult guidelines; however, children differ significantly in terms of mechanism, pathophysiology, and consequence of injury. Aim: To review TBI management in the PICU and gain insight into potential management strategies. Method: To conduct this review, a literature search was conducted using MEDLINE, PUBMED and The Cochrane Library with the following key words: traumatic brain injury; paediatric; hypothermia. No date restrictions were applied, to ensure that past studies whose principles remain current were not excluded. Results: Three areas were identified from the literature search and will be discussed against current acknowledged treatment strategies: prophylactic hypothermia, brain tissue oxygen tension monitoring, and decompressive craniectomy. Conclusion: Previous literature has failed to fully address paediatric-specific management protocols, and we therefore have little evidence-based guidance. This review has shown that there is an emerging and ongoing trend towards paediatric-specific TBI research, in particular in the area of moderate prophylactic hypothermia (MPH).
Abstract:
The RatSLAM system can perform vision-based SLAM using a computational model of the rodent hippocampus. When the number of pose cells used to represent space in RatSLAM is reduced, artifacts are introduced that hinder its use for goal-directed navigation. This paper describes a new component for the RatSLAM system called an experience map, which provides a coherent representation for goal-directed navigation. Results are presented for two sets of real-world experiments, including a comparison with the original goal memory system's performance in the same environment. Preliminary results are also presented demonstrating the ability of the experience map to adapt to simple short-term changes in the environment.
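To illustrate how a graph of linked experiences can support goal-directed navigation, a minimal sketch follows; the node names, transition costs, and shortest-path routing are illustrative assumptions, not the RatSLAM experience-map algorithm itself.

```python
import heapq

def shortest_route(edges, start, goal):
    """Dijkstra over an adjacency dict {node: [(neighbour, cost), ...]}."""
    frontier = [(0.0, start, [start])]
    visited = set()
    while frontier:
        cost, node, path = heapq.heappop(frontier)
        if node == goal:
            return cost, path
        if node in visited:
            continue
        visited.add(node)
        for nxt, step in edges.get(node, []):
            if nxt not in visited:
                heapq.heappush(frontier, (cost + step, nxt, path + [nxt]))
    return float("inf"), []

# Experiences A..E linked by odometry-derived transition costs (made-up values).
edges = {
    "A": [("B", 1.0), ("C", 2.5)],
    "B": [("C", 1.0), ("D", 3.0)],
    "C": [("D", 1.0)],
    "D": [("E", 1.0)],
}
print(shortest_route(edges, "A", "E"))  # cheapest path from experience A to goal E
```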
Abstract:
This paper describes an application of decoupled probabilistic world modeling to achieve team planning. The research is based on the principle that the action selection mechanism of a member in a robot team can select an effective action if a global world model is available to all team members. In the real world, the sensors are imprecise and individual to each robot, providing each robot with a partial and unique view of the environment. We address this problem by creating a probabilistic global view on each agent by combining the perceptual information from each robot. This probabilistic view forms the basis for selecting actions to achieve the team goal in a dynamic environment. Experiments have been carried out to investigate the effectiveness of this principle using custom-built robots for real-world performance, in addition to extensive simulation results. The results show an improvement in team effectiveness when using probabilistic world modeling based on perception sharing for team planning.
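One simple way to combine noisy per-robot estimates into a shared probabilistic view is precision-weighted Gaussian fusion; the sketch below is an illustrative assumption about such fusion, not the paper's algorithm, and the numbers are made up.

```python
import numpy as np

def fuse_gaussian_estimates(means, variances):
    """Fuse independent Gaussian estimates of the same quantity."""
    means = np.asarray(means, dtype=float)
    precisions = 1.0 / np.asarray(variances, dtype=float)
    fused_var = 1.0 / precisions.sum(axis=0)
    fused_mean = fused_var * (precisions * means).sum(axis=0)
    return fused_mean, fused_var

# Three robots report the 2D position of the same object with different noise levels.
means = [[1.2, 0.9], [1.0, 1.1], [1.4, 1.0]]
variances = [[0.10, 0.10], [0.05, 0.05], [0.20, 0.20]]
mean, var = fuse_gaussian_estimates(means, variances)
print("fused position:", mean, "fused variance:", var)
```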
Abstract:
This paper presents a new approach to improving the effectiveness of autonomous systems that deal with dynamic environments. The basis of the approach is to find repeating patterns of behavior in the dynamic elements of the system, and then to use predictions of the repeating elements to better plan goal-directed behavior. It is a layered approach involving classifying, modeling, predicting, and exploiting. Classifying involves using observations to place the moving elements into previously defined classes. Modeling involves recording features of the behavior on a coarse-grained grid. Exploitation is achieved by integrating predictions from the model into the behavior selection module to improve the utility of the robot's actions. This is in contrast to typical approaches that use the model to select between different strategies or plays. Three methods of adaptation to the dynamic features of the environment are explored. The effectiveness of each method is determined using statistical tests over a number of repeated experiments. The work is presented in the context of predicting opponent behavior in the highly dynamic, multi-agent robot soccer domain (RoboCup).
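A minimal sketch of the modeling and prediction layers described above: accumulating observations of a moving element on a coarse grid and reading the grid back as an occupancy distribution. The grid dimensions, cell size, and observation stream are illustrative assumptions, not the paper's implementation.

```python
import numpy as np

class CoarseGridModel:
    def __init__(self, width_m=6.0, height_m=4.0, cell_m=0.5):
        self.cell = cell_m
        self.counts = np.zeros((int(height_m / cell_m), int(width_m / cell_m)))

    def observe(self, x, y):
        """Record one observation of the tracked element at world position (x, y)."""
        row, col = int(y / self.cell), int(x / self.cell)
        if 0 <= row < self.counts.shape[0] and 0 <= col < self.counts.shape[1]:
            self.counts[row, col] += 1

    def predict(self):
        """Return a probability map of where the element is expected to be."""
        total = self.counts.sum()
        return self.counts / total if total > 0 else self.counts

model = CoarseGridModel()
for x, y in [(1.1, 2.0), (1.2, 2.1), (1.3, 2.0), (4.8, 0.6)]:
    model.observe(x, y)
print(model.predict().max())  # probability of the most likely cell
```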
Abstract:
A persistent question in the development of models for macroeconomic policy analysis has been the relative role of economic theory and evidence in their construction. This paper looks at some popular strategies that involve setting up a theoretical or conceptual model (CM) which is transformed to match the data and then made operational for policy analysis. A dynamic general equilibrium model is constructed that is similar to standard CMs. After calibration to UK data it is used to examine the utility of formal econometric methods in assessing the match of the CM to the data and also to evaluate some standard model-building strategies.
Keywords: Policy oriented economic modeling; Model evaluation; VAR models