920 results for Model-based optimization
Abstract:
The superior cervical ganglion (SCG) in mammals varies in structure according to developmental age, body size, gender, lateral asymmetry, the size and nuclear content of neurons, and the complexity and synaptic coverage of their dendritic trees. In small and medium-sized mammals, neuron number and size increase from birth to adulthood and, in phylogenetic studies, vary with body size. However, recent studies on larger animals suggest that body weight does not, in general, accurately predict neuron number. We have applied design-based stereological tools at the light-microscopic level to assess the volumetric composition of ganglia and to estimate the numbers and sizes of neurons in SCGs from rats, capybaras and horses. Using transmission electron microscopy, we have obtained design-based estimates of the surface coverage of dendrites by postsynaptic apposition zones and model-based estimates of the numbers and sizes of synaptophysin-labelled axo-dendritic synaptic disks. Linear regression analysis of log-transformed data has been undertaken in order to establish the nature of the relationships between numbers and SCG volume (V_scg). For SCGs (five per species), the allometric relationship for neuron number (N) is N = 35,067 * V_scg^0.781 and that for synapses is N = 20,095,000 * V_scg^1.328, V_scg being a good predictor of neuron number but a poor predictor of synapse number. Our findings thus reveal the nature of SCG growth in terms of its main ingredients (neurons, neuropil, blood vessels) and show that larger mammals have SCG neurons exhibiting more complex arborizations and greater numbers of axo-dendritic synapses.
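As a sketch of the log-log regression behind such allometric fits (the volume/number pairs below are hypothetical, not the study's data; only the fitting procedure is illustrated):

```python
import numpy as np

# Hypothetical (V_scg, N) pairs -- illustrative only, not the study's data.
V = np.array([0.9, 4.2, 35.0, 120.0, 310.0])       # ganglion volume, mm^3
N = np.array([3.1e4, 9.5e4, 5.6e5, 1.4e6, 3.0e6])  # estimated neuron number

# Fit log10(N) = log10(a) + b*log10(V), i.e. the allometry N = a * V^b.
b, log_a = np.polyfit(np.log10(V), np.log10(N), deg=1)
a = 10 ** log_a
print(f"N = {a:,.0f} * V_scg^{b:.3f}")
```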
Abstract:
High-level microsatellite instability (MSI-H) is demonstrated in 10 to 15% of sporadic colorectal cancers and in most cancers presenting in the inherited condition hereditary nonpolyposis colorectal cancer (HNPCC). Distinction between these categories of MSI-H cancer is of clinical importance, and the aim of this study was to assess clinical, pathological, and molecular features that might be discriminatory. One hundred and twelve MSI-H colorectal cancers from families fulfilling the Bethesda criteria were compared with 57 sporadic MSI-H colorectal cancers. HNPCC cancers presented at a lower age (P < 0.001), with no sporadic MSI-H cancer being diagnosed before the age of 57 years. MSI was less extensive in HNPCC cancers, with 72% of microsatellite markers showing band shifts compared with 87% in sporadic tumors (P < 0.001). Absent immunostaining for hMSH2 was only found in HNPCC tumors. Methylation of hMLH1 was observed in 87% of sporadic cancers but also in 55% of HNPCC tumors that showed loss of expression of hMLH1 (P = 0.02). HNPCC cancers were more frequently characterized by aberrant beta-catenin immunostaining as evidenced by nuclear positivity (P < 0.001). Aberrant p53 immunostaining was infrequent in both groups. There were no differences with respect to 5q loss of heterozygosity or codon 12 K-ras mutation, which were infrequent in both groups. Sporadic MSI-H cancers were more frequently heterogeneous (P < 0.001), poorly differentiated (P = 0.02), mucinous (P = 0.02), and proximally located (P = 0.04) than HNPCC tumors. In sporadic MSI-H cancers, contiguous adenomas were likely to be serrated, whereas traditional adenomas were dominant in HNPCC. Lymphocytic infiltration was more pronounced in HNPCC, but the results did not reach statistical significance. Overall, HNPCC cancers were more like common colorectal cancer in terms of morphology and expression of beta-catenin, whereas sporadic MSI-H cancers displayed features consistent with a different morphogenesis. No individual feature was discriminatory for all HNPCC cancers. However, a model based on four features was able to classify 94.5% of tumors as sporadic or HNPCC. The finding of multiple differences between sporadic and familial MSI-H colorectal cancer with respect to both genotype and phenotype is consistent with tumorigenesis through parallel evolutionary pathways and emphasizes the importance of studying the two groups separately.
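As an illustration of how a small feature-based classifier of this kind might be constructed (a sketch with synthetic data; the study's actual four features, their coding, and the reported 94.5% accuracy are not reproduced here):

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
y = np.array([1] * 112 + [0] * 57)   # 1 = HNPCC, 0 = sporadic, as in the study
# Four hypothetical binary features with group-dependent prevalences.
X = np.column_stack([
    rng.binomial(1, np.where(y == 1, 0.8, 0.1)),   # e.g. age < 57 at diagnosis
    rng.binomial(1, np.where(y == 1, 0.6, 0.1)),   # e.g. nuclear beta-catenin
    rng.binomial(1, np.where(y == 1, 0.2, 0.7)),   # e.g. mucinous histology
    rng.binomial(1, np.where(y == 1, 0.3, 0.8)),   # e.g. proximal location
])

clf = LogisticRegression().fit(X, y)
print(f"training accuracy: {clf.score(X, y):.1%}")
```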
Abstract:
1. The past 15 years have seen the emergence of a new field of neuroscience research based primarily on how the immune system and the central nervous system interact. A notable example of this interaction occurs when peripheral inflammation, infection or tissue injury activates the hypothalamic-pituitary-adrenal (HPA) axis. 2. During such assaults, immune cells release the pro-inflammatory cytokines interleukin (IL)-1, IL-6 and tumour necrosis factor-alpha into the general circulation. 3. These cytokines are believed to act as mediators for HPA axis activation. However, physical limitations of cytokines impede their movement across the blood-brain barrier and, consequently, it has been unclear precisely how and where IL-1beta signals cross into the brain to trigger HPA axis activation. 4. Evidence from recent anatomical and functional studies suggests that two neuronal networks may be involved in triggering HPA axis activity in response to circulating cytokines: the catecholamine cells of the medulla oblongata and the circumventricular organs (CVO). 5. The present paper examines the role of the CVO in generating HPA axis responses to pro-inflammatory cytokines and culminates with a proposed model based on cytokine signalling primarily involving the area postrema and catecholamine cells in the ventrolateral and dorsal medulla.
Abstract:
Map algebra is a data model and simple functional notation for studying the distribution and patterns of spatial phenomena. It uses a uniform representation of space as discrete grids, which are organized into layers. This paper discusses extensions to map algebra that handle neighborhood operations with a new data type called a template. Templates provide general windowing operations on grids to enable spatial models for cellular automata, mathematical morphology, and local spatial statistics. A programming language for map algebra that incorporates templates and special processing constructs is described. The programming language is called MapScript. Example program scripts are presented that perform diverse and interesting neighborhood analyses for descriptive, model-based, and process-based analysis.
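A minimal Python analogue of a template-driven neighborhood operation may help fix ideas (MapScript's own syntax is not shown; `focal_apply` and the cross-shaped window are illustrative):

```python
import numpy as np

def focal_apply(grid, template, func=np.nanmean):
    """Apply a neighborhood (template) operation at every cell of a grid.
    `template` is a boolean mask defining the window shape, centred on the
    target cell -- a simplified stand-in for MapScript's template type."""
    r, c = template.shape
    pr, pc = r // 2, c // 2
    padded = np.pad(grid.astype(float), ((pr, pr), (pc, pc)),
                    constant_values=np.nan)
    out = np.empty(grid.shape)
    for i in range(grid.shape[0]):
        for j in range(grid.shape[1]):
            window = padded[i:i + r, j:j + c]
            out[i, j] = func(window[template])
    return out

elevation = np.random.rand(5, 5)                                 # one grid layer
cross = np.array([[0, 1, 0], [1, 1, 1], [0, 1, 0]], dtype=bool)  # template
smoothed = focal_apply(elevation, cross)   # focal mean over the cross window
```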
Abstract:
The present paper addresses two major concerns that were identified when developing neural-network-based prediction models and which can limit their wider applicability in industry. The first problem is that neural network models do not appear to be readily available to a corrosion engineer. Therefore, the first part of this paper describes a neural network model of CO2 corrosion which was created using a standard commercial software package and simple modelling strategies. It was found that such a model was able to capture practically all of the trends noticed in the experimental data with acceptable accuracy. This exercise has proven that a corrosion engineer could readily develop a neural network model such as the one described here for any problem at hand, given that sufficient experimental data exist. This applies even in cases where the understanding of the underlying processes is poor. The second problem arises in cases where not all the required inputs for a model are known, or where they can be estimated only with a limited degree of accuracy. It seems advantageous to have models that can take as input a range rather than a single value. One such model, based on the so-called Monte Carlo approach, is presented. A number of comparisons are shown which illustrate how a corrosion engineer might use this approach to rapidly test the sensitivity of a model to the uncertainties associated with the input parameters. (C) 2001 Elsevier Science Ltd. All rights reserved.
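The range-valued input idea can be sketched as straightforward Monte Carlo propagation (a sketch: the uniform input ranges are hypothetical, and a de Waard-Milliams-style correlation stands in for the trained neural network):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 10_000

# Inputs given as ranges rather than single values (hypothetical bounds).
T    = rng.uniform(20.0, 60.0, n)   # temperature, deg C
pCO2 = rng.uniform(0.5, 2.0, n)     # CO2 partial pressure, bar

def corrosion_rate(T, pCO2):
    """Stand-in for the neural network: a de Waard-Milliams-style correlation."""
    return 10 ** (5.8 - 1710.0 / (T + 273.15) + 0.67 * np.log10(pCO2))

rates = corrosion_rate(T, pCO2)     # mm/year
print(f"median: {np.median(rates):.2f} mm/y, "
      f"90% interval: {np.percentile(rates, [5, 95]).round(2)}")
```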
Abstract:
Observations of accelerating seismic activity prior to large earthquakes in natural fault systems have raised hopes for intermediate-term earthquake forecasting. If this phenomenon does exist, then what causes it to occur? Recent theoretical work suggests that the accelerating seismic release sequence is a symptom of increasing long-wavelength stress correlation in the fault region. A more traditional explanation, based on Reid's elastic rebound theory, argues that an accelerating sequence of seismic energy release could be a consequence of increasing stress in a fault system whose stress moment release is dominated by large events. Both of these theories are examined using two discrete models of seismicity: a Burridge-Knopoff block-slider model and an elastic continuum-based model. Both models display an accelerating release of seismic energy prior to large simulated earthquakes. In both models there is a correlation between the rate of seismic energy release and both the total root-mean-squared stress and the level of long-wavelength stress correlation. Furthermore, both models exhibit a systematic increase in the number of large events at high stress and high long-wavelength stress correlation levels. These results suggest that either explanation is plausible for the accelerating moment release in the models examined. A statistical model based on the Burridge-Knopoff block-slider is constructed which indicates that stress alone is sufficient to produce an accelerating release of seismic energy with time prior to a large earthquake.
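A minimal cellular spring-block model in the Burridge-Knopoff family illustrates the load/topple dynamics discussed here (an Olami-Feder-Christensen-style sketch, not the authors' implementation; lattice size, transfer fraction and threshold are arbitrary):

```python
import numpy as np

rng = np.random.default_rng(1)
L, alpha, f_th = 32, 0.2, 1.0            # lattice size, transfer fraction, threshold
stress = rng.uniform(0.0, f_th, (L, L))  # initial block stresses

def drive_and_relax(stress):
    """Load uniformly to the next failure, then topple until stable.
    Returns the event size (number of topplings), a proxy for the
    seismic energy released in one simulated earthquake."""
    stress += f_th - stress.max()        # uniform tectonic loading
    size = 0
    while True:
        over = np.argwhere(stress >= f_th)
        if len(over) == 0:
            return size
        for i, j in over:
            s, stress[i, j] = stress[i, j], 0.0
            for di, dj in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                if 0 <= i + di < L and 0 <= j + dj < L:
                    stress[i + di, j + dj] += alpha * s   # partial stress transfer
            size += 1

sizes = [drive_and_relax(stress) for _ in range(5000)]    # event-size catalogue
```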
Abstract:
We focus on mixtures of factor analyzers from the perspective of a method for model-based density estimation from high-dimensional data, and hence for the clustering of such data. This approach enables a normal mixture model to be fitted to a sample of n data points of dimension p, where p is large relative to n. The number of free parameters is controlled through the dimension of the latent factor space. Working in this reduced space allows each component-covariance matrix to be modelled with a complexity lying between that of the isotropic and full covariance structure models. We illustrate the use of mixtures of factor analyzers in a practical example that considers the clustering of cell lines on the basis of gene expressions from microarray experiments. (C) 2002 Elsevier Science B.V. All rights reserved.
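The component-covariance structure referred to here is the factor-analytic decomposition Sigma = Lambda Lambda' + Psi; a short sketch shows how it caps the free-parameter count (dimensions are arbitrary, and identifiability corrections to the count are omitted):

```python
import numpy as np

p, q = 1000, 5   # observed dimension vs. latent factor dimension

rng = np.random.default_rng(2)
Lambda = rng.normal(size=(p, q))          # factor loadings, one component
Psi = np.diag(rng.uniform(0.5, 1.5, p))   # diagonal matrix of uniquenesses

Sigma = Lambda @ Lambda.T + Psi           # component covariance, p x p

params_mfa  = p * q + p                   # loadings plus diagonal terms
params_full = p * (p + 1) // 2            # unrestricted covariance
print(params_mfa, params_full)            # 6000 vs 500500
```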
Abstract:
In this paper, we present the results of a qualitative study of subordinate perceptions of leaders. The study represents a preliminary test of a model based on Affective Events Theory, which posits that leaders who are seen to be effective shape the affective events that determine employees' attitudes and behaviours in the workplace. Within this framework, we argue that effective leaders ameliorate employees' hassles by providing frequent, small emotional uplifts. The resulting positive affective states are then proposed to lead to more positive employee attitudes and behaviours, and more positive regard for the leader. Importantly, leaders who demonstrate these ameliorating behaviours are likely to require high levels of emotional intelligence, defined in terms of the ability to recognise, understand, and manage emotions in self and others. To investigate this model, we conducted interviews and focus groups with 10 leaders and 24 employees. Results confirmed that these processes do indeed exist in the workplace. In particular, leaders who were seen by employees to provide continuous small emotional uplifts were consistently held to be the most effective. Study participants were especially affected by negative events (or hassles). Leaders who failed to deal with hassles or, worse still, were the source of hassles, were consistently seen to be less effective. We conclude with a discussion of implications for practicing managers, and suggest that our exploratory findings provide justification for emotional intelligence training as a means to improve leader perceptions and effectiveness. [Abstract from author]
Abstract:
Analytical methods based on evaluation models of interactive systems were proposed as an alternative to user testing in the last stages of software development, owing to the cost of such testing. However, the use of isolated behavioral models of the system limits the results of analytical methods. An example of these limitations is that they are unable to identify implementation issues that will impact usability. With the introduction of model-based testing, we are able to test whether the implemented software meets the specified model. This paper presents a model-based approach for test case generation from the static analysis of source code.
Abstract:
Graphical user interfaces (GUIs) make software easy to use by providing the user with visual controls. Therefore, the correctness of GUI code is essential to the correct execution of the overall software. Models can help in the evaluation of interactive applications by allowing designers to concentrate on their most important aspects. This paper presents a generic model for language-independent reverse engineering of graphical-user-interface-based applications, and we explore the integration of model-based testing techniques into our approach, thus allowing us to perform fault detection. A prototype tool has been constructed which is already capable of deriving and testing a user interface behavioral model of applications written in Java/Swing.
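A language-agnostic sketch of the model-based testing step (a hypothetical dialog, not the Java/Swing tool itself): the behavioral model is a finite state machine, and the implementation is replayed against it.

```python
# Behavioral model of a trivial dialog as a finite state machine
# (in the tool, such a model would be reverse-engineered from the GUI code).
MODEL = {
    ("closed", "open"):   "ready",
    ("ready",  "submit"): "busy",
    ("busy",   "done"):   "closed",
    ("ready",  "cancel"): "closed",
}

class Dialog:
    """Hand-written stand-in for the implementation under test."""
    def __init__(self):
        self.state = "closed"
    def fire(self, event):
        key = (self.state, event)
        if key == ("closed", "open"):    self.state = "ready"
        elif key == ("ready", "submit"): self.state = "busy"
        elif key == ("busy", "done"):    self.state = "closed"
        elif key == ("ready", "cancel"): self.state = "closed"
        else: raise ValueError(f"unexpected {event!r} in state {self.state!r}")

def test_path(events):
    """Replay an event sequence on model and implementation; compare states."""
    sut, state = Dialog(), "closed"
    for e in events:
        state = MODEL[(state, e)]
        sut.fire(e)
        assert sut.state == state, f"fault detected after event {e!r}"

test_path(["open", "submit", "done", "open", "cancel"])
```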
Abstract:
In the context of an effort to develop methodologies to support the evaluation of interactive systems, this paper investigates an approach to detecting graphical user interface bad smells. Our approach consists of detecting user interface bad smells through model-based reverse engineering from source code. Models are used to define which widgets are present in the interface, when particular graphical user interface (GUI) events can occur, under which conditions, which system actions are executed, and which GUI state is generated next.
Abstract:
With the constant development of new information technologies, organizations need to implement new management tools in order to generate competitive advantages. In this vein, this dissertation proposes the implementation of a Strategic Management Model based on the Balanced Scorecard in an interbank services company, with the aim of assisting in the creation of competitive capabilities through a more precise and structured performance assessment. This tool emerged as an alternative to old, traditional systems whose objective was to control the activities performed by employees; the methodology put strategy, and not only control, at the centre of attention, but it was not until 1992 that it was recognized as a revolutionary process that changed the entire standard management process in companies. This dissertation discusses some concepts of this methodology, emphasizing the strategy map and the advantages of adopting the methodology, and presents a practical study of the application of the Strategic Management Model at EMIS. Finally, it is worth noting that the great importance that companies have been attributing to this methodology, and their investment in it, guarantees its relevance as a research topic in the near future.
Abstract:
This dissertation aims to simulate the dynamic behaviour of a reinforced concrete slab by applying the Finite Element Method through its implementation in the FreeFEM++ program. This program allows the analysis of the three-dimensional mathematical model of the Theory of Linear Elasticity, encompassing the equilibrium equation, the compatibility equation, and the constitutive relations. Since this is a dynamic problem, it is necessary to resort to numerical direct time-integration methods in order to obtain the response in terms of displacement over time. For this work we chose the Newmark method and the Euler method for the temporal discretization, one for its popularity and the other for its simplicity of implementation. The results obtained with FreeFEM++ are validated by comparison with results obtained from SAP2000 and with theoretical solutions, where possible.
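For reference, the Newmark time-integration scheme mentioned above can be sketched for a single-degree-of-freedom system (average-acceleration parameters; the mass, damping, stiffness and load values are hypothetical, not those of the slab model):

```python
import numpy as np

# Newmark integration of M*a + C*v + K*u = F(t), average acceleration
# (beta = 1/4, gamma = 1/2, unconditionally stable).
beta, gamma = 0.25, 0.5
M, C, K = 1.0, 0.05, 100.0                 # hypothetical SDOF properties
dt, n = 0.01, 1000
F = np.zeros(n); F[:10] = 1.0              # short impulse load

u, v, a = np.zeros(n), np.zeros(n), np.zeros(n)
a[0] = (F[0] - C * v[0] - K * u[0]) / M

K_eff = K + gamma / (beta * dt) * C + M / (beta * dt ** 2)
for i in range(n - 1):
    rhs = (F[i + 1]
           + M * (u[i] / (beta * dt ** 2) + v[i] / (beta * dt)
                  + (0.5 / beta - 1) * a[i])
           + C * (gamma / (beta * dt) * u[i] + (gamma / beta - 1) * v[i]
                  + dt * (gamma / (2 * beta) - 1) * a[i]))
    u[i + 1] = rhs / K_eff
    a[i + 1] = ((u[i + 1] - u[i]) / (beta * dt ** 2)
                - v[i] / (beta * dt) - (0.5 / beta - 1) * a[i])
    v[i + 1] = v[i] + dt * ((1 - gamma) * a[i] + gamma * a[i + 1])
```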
Computational evaluation of hydraulic system behaviour with entrapped air under rapid pressurization
Abstract:
The pressurization of hydraulic systems containing entrapped air is considered a critical condition for the infrastructure's security, owing to the transient pressure variations that often occur. The objective of the present study is the computational evaluation of the trends observed in the variation of the maximum surge pressure resulting from rapid pressurizations. The results are also compared with those obtained in previous studies. A brief state of the art in this domain is presented. This research work is applied to an experimental system with entrapped air at the top of a vertical pipe section. The evaluation is carried out with the elastic model based on the method of characteristics, considering a moving liquid boundary, and the results are compared with those achieved with the rigid liquid column model.
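A bare-bones sketch of the method of characteristics for a single liquid-filled pipe may clarify the elastic-model machinery (reservoir upstream, instantaneously closed valve downstream; the entrapped-air pocket and moving liquid boundary of the study are deliberately omitted, and all pipe data are hypothetical):

```python
import numpy as np

a, g = 1000.0, 9.81                 # wave speed (m/s), gravity (m/s^2)
L_pipe, D, f = 100.0, 0.05, 0.02    # length (m), diameter (m), friction factor
N = 20                              # computational reaches
dx = L_pipe / N; dt = dx / a        # Courant number a*dt/dx = 1
A = np.pi * D ** 2 / 4
B = a / (g * A)                     # characteristic impedance
R = f * dx / (2 * g * D * A ** 2)   # friction coefficient

H = np.full(N + 1, 30.0)            # initial head (m)
Q = np.full(N + 1, 0.002)           # initial flow (m^3/s)

for _ in range(400):
    Hn, Qn = H.copy(), Q.copy()
    for i in range(1, N):           # interior points: C+ and C- characteristics
        Cp = H[i - 1] + B * Q[i - 1] - R * Q[i - 1] * abs(Q[i - 1])
        Cm = H[i + 1] - B * Q[i + 1] + R * Q[i + 1] * abs(Q[i + 1])
        Hn[i] = 0.5 * (Cp + Cm)
        Qn[i] = (Cp - Cm) / (2 * B)
    Hn[0] = 30.0                    # upstream reservoir: fixed head
    Cm = H[1] - B * Q[1] + R * Q[1] * abs(Q[1])
    Qn[0] = (Hn[0] - Cm) / B
    Qn[N] = 0.0                     # downstream valve closed: no flow
    Hn[N] = H[N - 1] + B * Q[N - 1] - R * Q[N - 1] * abs(Q[N - 1])
    H, Q = Hn, Qn

print(f"head at the closed valve: {H[N]:.1f} m")
```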