906 results for computational models
Resumo:
Classical regression methods take vectors as covariates and estimate the corresponding vectors of regression parameters. When regression problems involve covariates of more complex form, such as multi-dimensional arrays (i.e., tensors), traditional computational models can be severely compromised by ultrahigh dimensionality and complex structure. By exploiting the special structure of tensor covariates, the tensor regression model provides a promising way to reduce the model's dimensionality to a manageable level, leading to efficient estimation. Most existing tensor-based methods estimate each individual regression problem independently, based on tensor decompositions that allow the simultaneous projection of an input tensor onto more than one direction along each mode. In practice, however, multi-dimensional data are collected under the same or very similar conditions, so the data share some common latent components while each regression task can also have its own independent parameters. It is therefore beneficial to analyse the regression parameters of all the regressions in a linked way. In this paper, we propose a tensor regression model based on the Tucker decomposition, which simultaneously identifies both the common components of the parameters across all regression tasks and the independent factors contributing to each particular task. Under this paradigm, the number of independent parameters along each mode is constrained by a sparsity-preserving regulariser. Linked multiway parameter analysis and sparsity modelling further reduce the total number of parameters, with lower memory cost than tensor-based counterparts. The effectiveness of the new method is demonstrated on real data sets.
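The parameter-reduction idea in the abstract above can be illustrated with a minimal sketch (not the paper's algorithm): a linear model whose coefficient tensor has a Tucker structure, shown here for the matrix (order-2) case with made-up dimensions and ranks.

```python
import numpy as np

# Sketch: coefficient B = U1 @ G @ U2.T (Tucker structure, matrix case),
# so a p1 x p2 covariate needs a small core G plus two factor matrices
# instead of p1*p2 free parameters. All sizes below are illustrative.
rng = np.random.default_rng(0)
p1, p2, r1, r2 = 8, 6, 2, 2          # covariate dims and Tucker ranks

G = rng.normal(size=(r1, r2))        # core (shared components)
U1 = rng.normal(size=(p1, r1))       # mode-1 factor matrix
U2 = rng.normal(size=(p2, r2))       # mode-2 factor matrix
B = U1 @ G @ U2.T                    # full coefficient, never needed explicitly

def predict(X, U1, G, U2):
    """Inner product <X, B> computed through the low-rank factors."""
    return np.sum((U1.T @ X @ U2) * G)

X = rng.normal(size=(p1, p2))
# Projecting first and contracting with the core gives the same answer
# as forming the full B, but stores only r1*r2 + p1*r1 + p2*r2 numbers.
assert np.isclose(predict(X, U1, G, U2), np.sum(X * B))
print(p1 * p2, "parameters reduced to", r1 * r2 + p1 * r1 + p2 * r2)
```

With these toy sizes the count drops from 48 free parameters to 32; the savings grow rapidly with the dimensions of each mode.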
Resumo:
Extratropical transition (ET) has eluded objective identification since the realisation of its existence in the 1970s. Recent advances in numerical, computational models have provided data of higher resolution than previously available. In conjunction with this, an objective characterisation of the structure of a storm has become widely accepted in the literature. Here we present a method of combining these two advances to provide an objective definition of ET. The approach applies K-means clustering to isolate different life-cycle stages of cyclones and then analyses the progression through these stages. The methodology is tested by applying it to five recent years of European Centre for Medium-Range Weather Forecasts operational analyses. It is found that the method is able to determine the general characteristics of ET in the Northern Hemisphere. Between 2008 and 2012, 54% (±7, 32 of 59) of Northern Hemisphere tropical storms are estimated to undergo ET. There is great variability across basins and times of year. To fully capture all instances of ET, it is necessary to introduce and characterise multiple pathways through transition. Only one of the three transition types needed has previously been well studied. A brief description of the alternate transition types is given, along with illustrative storms, to assist further study.
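The clustering step described above can be sketched as plain K-means over per-timestep storm-structure features; the two-dimensional features and the "warm-core vs cold-core" framing below are synthetic stand-ins, not the paper's actual inputs.

```python
import numpy as np

# Lloyd's K-means on synthetic storm-structure features; a storm would
# "undergo ET" in this toy setting when its trajectory of feature points
# moves from one cluster (stage) to another over time.
def kmeans(X, k, iters=50):
    # deterministic init: first point, then the point farthest from it
    centers = np.stack([X[0], X[np.argmax(((X - X[0]) ** 2).sum(-1))]])
    for _ in range(iters):
        labels = np.argmin(((X[:, None] - centers) ** 2).sum(-1), axis=1)
        for j in range(k):
            if np.any(labels == j):
                centers[j] = X[labels == j].mean(axis=0)
    return labels, centers

rng = np.random.default_rng(1)
# two synthetic life-cycle stages, e.g. warm-core vs cold-core structure
X = np.vstack([rng.normal([1.0, 1.0], 0.1, (50, 2)),
               rng.normal([-1.0, -1.0], 0.1, (50, 2))])
labels, centers = kmeans(X, k=2)
```

The deterministic farthest-point initialization keeps this sketch reproducible; a production version would use multiple random restarts.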
Resumo:
Recent advances in CMOS technology have allowed the fabrication of transistors with submicronic dimensions, making possible the integration of tens of millions of devices in a single chip that can be used to build very complex electronic systems. This increase in design complexity has created a need for more efficient verification tools that incorporate more appropriate physical and computational models. Timing verification aims at determining whether the timing constraints imposed on the design can be satisfied. It can be performed by circuit simulation or by timing analysis. Although simulation tends to furnish the most accurate estimates, it has the drawback of being stimuli-dependent. Hence, to ensure that the critical situation is taken into account, one must exercise all possible input patterns; obviously, this is infeasible given the high complexity of current designs. To circumvent this problem, designers must rely on timing analysis. Timing analysis is an input-independent verification approach that models each combinational block of a circuit as a directed acyclic graph, which is used to estimate the critical delay. The first timing analysis tools used only circuit topology information to estimate circuit delay, and are thus referred to as topological timing analyzers. However, this method may yield overly pessimistic delay estimates, since the longest paths in the graph may not be able to propagate a transition, that is, they may be false. Functional timing analysis, in turn, considers not only circuit topology, but also the temporal and functional relations between circuit elements.
Functional timing analysis tools differ in three aspects: the set of sensitization conditions required to declare a path sensitizable (the so-called path sensitization criterion), the number of paths handled simultaneously, and the method used to determine whether the sensitization conditions are satisfiable. Currently, the two most efficient approaches test the sensitizability of entire sets of paths at a time: one is based on automatic test pattern generation (ATPG) techniques, and the other translates the timing analysis problem into a satisfiability (SAT) problem. Although timing analysis has been exhaustively studied over the last fifteen years, some specific topics have not yet received the required attention. One such topic is the applicability of functional timing analysis to circuits containing complex gates. This is the central concern of this thesis. In addition, as a necessary step to set the scenario, a detailed and systematic study of functional timing analysis is also presented.
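The topological estimate that functional analysis refines can be sketched as a longest-path computation over the circuit's delay-weighted DAG; the tiny circuit and delays below are illustrative.

```python
from collections import defaultdict

# Topological timing analysis in miniature: the critical-delay estimate
# is the longest source-to-sink path in the delay-weighted DAG.
# Functional analysis would further prune paths that cannot actually
# propagate a transition (false paths).
def critical_delay(edges, n):
    """edges: list of (u, v, delay); nodes 0..n-1 in topological order."""
    adj = defaultdict(list)
    for u, v, d in edges:
        adj[u].append((v, d))
    arrival = [0.0] * n                      # latest arrival time per node
    for u in range(n):                       # relax edges in topological order
        for v, d in adj[u]:
            arrival[v] = max(arrival[v], arrival[u] + d)
    return max(arrival)

# Toy circuit with reconvergent fanout: the longer path 0->1->3 sets the
# topological estimate even if it happens to be false.
print(critical_delay([(0, 1, 2.0), (0, 2, 1.0), (1, 3, 2.0), (2, 3, 1.0)], 4))
# -> 4.0
```

Because the graph is acyclic, one pass in topological order suffices; no search over individual paths is needed for the purely topological bound.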
Resumo:
This work presents a study of natural ventilation and its relation to urban legislation and building types in an urban fraction of the coastal area of Praia do Meio in the city of Natal/RN, addressing the type or types of land use most appropriate to this limited urban fraction. The objective is to analyse the effects of the current legislation, as well as of the building types in this area, on natural ventilation. This urban fraction was selected because it is one of the sites through which the wind flows into the city of Natal. The research is based on the hypothesis that a reduction in the porosity of the urban fabric (decrease in set-back/boundary clearance), together with an increase in built form (height of the buildings), raises the level of the ventilation gradient, consequently reducing wind speed at the lower part of the buildings. Three-dimensional computational models were used to reproduce the modes of occupation allowed in the urban fraction under study, and Computational Fluid Dynamics (CFD) software was used to analyse the modes of land occupation. Following simulation, a statistical assessment was carried out to validate the hypothesis. It was concluded that the reduction in soil porosity resulting from the rates that define the minimum clearance between the building and the boundary of the plot (and consequently the set-back), as well as the increase in built form (height of the buildings), caused a reduction in wind speed, thus creating heat islands.
Resumo:
Noise pollution degrades the quality of the environment and is one of the most common environmental problems in large cities. The acoustic study of a complex urban environment needs to consider the contribution of various noise sources. Accordingly, computational models for mapping and predicting the acoustic scene become important, because they enable calculations, analyses and reports that allow a satisfactory interpretation of results. The study area is the neighborhood of Lagoa Nova, a central area of the city of Natal, which will undergo major changes in urban space due to the urban mobility projects planned for the area around the stadium and the consequent changes in urban form and traffic. This study therefore aims to evaluate the noise impact caused by road and morphological changes around the Arena das Dunas stadium in the neighborhood of Lagoa Nova, through on-site measurements and mapping using the computational model SoundPLAN, for the year 2012 and for the predicted acoustic scenario of 2017. For this analysis, a first acoustic map was built based on the current acoustic diagnosis of the neighborhood, physical mapping, classified vehicle counts and sound pressure level measurements; to build the noise prediction, the planned modifications to traffic, urban form and mobility works in the study area were taken into account. It is concluded that the sound pressure levels for both 2012 and 2017 exceed current legislation. The noise prediction shows numerous changes in the acoustic scene, in which the planned urban mobility works will improve traffic flow and thus reduce the sound pressure level where interventions are expected.
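The core operation behind combining several traffic sources at a receiver point is energetic (logarithmic) summation of sound pressure levels; the sketch below shows the general acoustics formula, not SoundPLAN's internals.

```python
import math

# Energetic summation of sound pressure levels: contributions are added
# on a power (10^(L/10)) basis, then converted back to decibels.
def combine_spl(levels_db):
    return 10 * math.log10(sum(10 ** (L / 10) for L in levels_db))

# Two equal 60 dB sources combine to about 63 dB, not 120 dB.
total = combine_spl([60.0, 60.0])
print(round(total, 1))  # -> 63.0
```

This is why doubling the traffic on a road raises the level at a receiver by roughly 3 dB, a useful sanity check when reading noise maps.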
Resumo:
The processing of materials by plasma has grown considerably in recent years in several technological applications, more specifically in surface treatment. This growth is due mainly to the great applicability of plasmas as an energy source, with thermal, chemical and/or physical behavior. On the other hand, the multiplicity of simultaneous physical effects (thermal, chemical and physical interactions) present in plasmas increases the complexity of understanding their interaction with solids. As an initial step in the development of this subject, the present work addresses the computational simulation of the heating and cooling of steel and copper samples immersed in a plasma atmosphere, considering two experimental geometric configurations: hollow cathode and plane cathode. To this end, three computational models were developed in Fortran 90: a one-dimensional transient model (1D, t), a two-dimensional transient model (2D, t), and a two-dimensional transient model (2D, t) that takes into account the presence of a sample holder in the experimental assembly. The models were developed based on the finite volume method and, for the two-dimensional configurations, the effect of the hollow cathode on the sample was treated as a lateral external heat source. The main results obtained with the three computational models, such as temperature distributions and thermal gradients in the samples and in the holder, were compared with those from the Laboratory of Plasma, LabPlasma/UFRN, and with experiments available in the literature. The observed behavior indicates the validity of the developed codes and illustrates the need for such computational tools in this type of process, given the ease with which they provide thermal information of interest.
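A minimal sketch of the kind of scheme the (1D, t) model above is built on, an explicit transient heat-conduction step with a flux boundary standing in for the plasma heating; the material properties and heat flux values are illustrative, not taken from the thesis.

```python
import numpy as np

# One explicit time step of 1D transient heat conduction on a uniform
# grid; q_left is a surface heat flux [W/m^2] on the heated (plasma) face.
def step(T, alpha, dx, dt, q_left=0.0, k=50.0):
    Tn = T.copy()
    # interior nodes: discrete diffusion (finite-volume / FTCS form)
    Tn[1:-1] = T[1:-1] + alpha * dt / dx**2 * (T[2:] - 2*T[1:-1] + T[:-2])
    Tn[0] = Tn[1] + q_left * dx / k      # flux boundary on the heated face
    Tn[-1] = T[-1]                       # far face held at fixed temperature
    return Tn

nx, dx, alpha, dt = 50, 1e-3, 1.2e-5, 0.01   # steel-like diffusivity [m^2/s]
# stability of the explicit scheme requires alpha*dt/dx**2 <= 1/2
assert alpha * dt / dx**2 <= 0.5
T = np.full(nx, 300.0)                        # start at 300 K everywhere
for _ in range(1000):
    T = step(T, alpha, dx, dt, q_left=2e4)    # heating on one face for 10 s
```

The stability check matters: an implicit scheme (or a smaller time step) would be needed if the grid were refined without reducing `dt`.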
Resumo:
Episodic memory refers to the recollection of what, where and when a specific event occurred. The hippocampus is a key structure in this type of memory. Computational models suggest that the dentate gyrus (DG) and the CA3 hippocampal subregions are involved in pattern separation and the rapid acquisition of episodic memories, while CA1 is involved in memory consolidation. However, there are few studies with animal models that simultaneously assess the 'what-where-when' aspects. Recently, an object recognition episodic-like memory task in rodents was proposed. This task consists of two sample trials and a test phase. In sample trial one, the rat is exposed to four copies of an object. In sample trial two, one hour later, the rat is exposed to four copies of a different object. In the test phase, 1 h later, two copies of each of the objects previously used are presented. One copy of the object used in sample trial one is located in a different place, and it is therefore expected to be the most explored object. However, the short retention delay of the task narrows its applications. This study verifies whether this task can be evoked after 24 h and whether pharmacological inactivation of the DG/CA3 and CA1 subregions differentially impairs its acquisition. Validation of the task with a longer interval (24 h) was accomplished (animals showed spatiotemporal object discrimination, and scopolamine (1 mg/kg, ip) injected post-training impaired performance). Afterwards, the GABA agonist muscimol (0.250 μg/μl; volume = 0.5 μl) or saline was injected into the hippocampal subregions fifteen minutes before training. Pre-training inactivation of the DG/CA3 subregions impaired spatial discrimination of the objects ('where'), while temporal discrimination ('when') was preserved. Rats treated with muscimol in the CA1 subregion explored all the objects equally, irrespective of place or presentation time.
Our results corroborate the computational models that postulate a role for DG/CA3 in spatial pattern separation, and a role for CA1 in the consolidation of different mnemonic episodes.
Resumo:
Circadian rhythms are variations in physiological processes that help living beings adapt to environmental cycles. These rhythms are generated and synchronized to the light-dark cycle through the suprachiasmatic nucleus. The integrity of circadian rhythmicity has great implications for human health: it is now known that disturbances in circadian rhythms are related to contemporary problems such as obesity, a propensity for certain types of cancer, and mental disorders. Circadian rhythmicity can be studied through experiments with animal models and directly in humans. In this work we use computational models to bring together experimental results from the literature and to explain results from our laboratory. Another focus of this study was the analysis of activity-rest rhythm data obtained experimentally. We review the variables used to analyze these data and propose an update on how to calculate them. Our models were able to reproduce the main experimental results in the literature and provided explanations for the results of experiments performed in our laboratory. The new variables used to analyze the activity-rest rhythm in humans were more efficient in describing the fragmentation and synchronization of this rhythm. The work thus contributes to improving existing tools for the study of circadian rhythms in mammals.
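Two of the standard nonparametric variables of the kind this abstract proposes updating are interdaily stability (IS) and intradaily variability (IV); the sketch below follows the commonly cited formulas (an assumption, not this thesis's updated definitions), with synthetic hourly data.

```python
import numpy as np

# IS compares the variance of the mean 24 h activity profile with the
# overall variance; values near 1 indicate a strongly synchronized rhythm.
def interdaily_stability(x, p=24):
    """x: hourly activity counts over whole days."""
    x = np.asarray(x, dtype=float)
    n = len(x)
    hourly_mean = x.reshape(-1, p).mean(axis=0)     # average daily profile
    num = n * np.sum((hourly_mean - x.mean()) ** 2)
    den = p * np.sum((x - x.mean()) ** 2)
    return num / den

# IV measures hour-to-hour fragmentation via successive differences.
def intradaily_variability(x):
    x = np.asarray(x, dtype=float)
    n = len(x)
    num = n * np.sum(np.diff(x) ** 2)
    den = (n - 1) * np.sum((x - x.mean()) ** 2)
    return num / den

# A perfectly repeating day gives IS = 1 (maximal synchronization).
day = np.r_[np.zeros(8), np.ones(16)]               # 8 h rest, 16 h activity
x = np.tile(day, 7)                                 # one identical week
print(round(interdaily_stability(x), 2))            # -> 1.0
```

Real actigraphy records would first be cleaned and binned into equal intervals; missing hours break the `reshape(-1, p)` step used here.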
Resumo:
Nowadays there is great interest in damage identification using non-destructive tests. Predictive maintenance, one of the most important techniques based on vibration analysis, consists basically of monitoring the condition of structures or machines. A complete procedure should be able to detect the damage, foresee its probable time of occurrence, and diagnose the type of fault in order to plan the maintenance operation at a convenient time. In practical problems, it is frequently necessary to solve nonlinear equations. These processes have been studied for a long time due to their great utility. Among the methods there are different approaches, for instance numerical (classical) methods, intelligent methods (artificial neural networks), evolutionary methods (genetic algorithms), and others. Damage characterization can, for better agreement, be classified into levels. A recent classification uses seven levels: detecting the existence of damage; detecting and locating the damage; detecting, locating and quantifying the damage; predicting the equipment's working life; self-diagnosis; control for automatic structural repair; and simultaneous control and monitoring. Neural networks are computational models or systems for information processing that, in a general way, can be thought of as a black-box device that accepts an input and produces an output. Artificial neural networks (ANN) are based on biological neural networks and have abilities for function identification and pattern classification. In this paper a methodology for locating structural damage is presented. The procedure is divided into two phases: the first uses system norms to localize the damage positions; the second uses ANN to quantify the severity of the damage. The paper concludes with a numerical application to a beam-like structure with five cases of structural damage of different severities.
The results show the applicability of the presented methodology. A great advantage is the possibility of applying this approach to the identification of simultaneous damage.
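The second phase (severity quantification by an ANN) can be sketched as a one-hidden-layer network trained by gradient descent; the mapping from a normalized "natural-frequency shift" feature to severity used below is an assumed toy relation, not the paper's data.

```python
import numpy as np

# Minimal MLP regression sketch: one tanh hidden layer, full-batch
# gradient descent, fitting an assumed shift->severity relation.
rng = np.random.default_rng(0)
X = rng.uniform(0, 1, (200, 1))          # normalized frequency shift
y = 0.8 * X + 0.1                        # assumed severity relation (toy)

W1 = rng.normal(0, 0.5, (1, 8)); b1 = np.zeros(8)
W2 = rng.normal(0, 0.5, (8, 1)); b2 = np.zeros(1)
lr = 0.5
for _ in range(2000):
    H = np.tanh(X @ W1 + b1)             # hidden activations
    out = H @ W2 + b2                    # predicted severity
    err = out - y
    # backpropagation of the mean-squared-error gradient
    gW2 = H.T @ err / len(X); gb2 = err.mean(0)
    dH = (err @ W2.T) * (1 - H ** 2)
    gW1 = X.T @ dH / len(X); gb1 = dH.mean(0)
    W2 -= lr * gW2; b2 -= lr * gb2; W1 -= lr * gW1; b1 -= lr * gb1

mse = float(np.mean((np.tanh(X @ W1 + b1) @ W2 + b2 - y) ** 2))
```

In the actual methodology the inputs would come from the norm-based localization phase; here a single feature keeps the sketch self-contained.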
Resumo:
The objective of this paper is to use the SIPOC, flowchart and IDEF0 modeling techniques in combination to elaborate the conceptual model of a simulation project. It is intended to identify the contribution of these techniques to the elaboration of the computational model. To illustrate the application, a practical case from a high-end technology enterprise is presented. The paper concludes that the proposed approach eases the elaboration of the computational model. © 2008 IEEE.
Resumo:
The structure of an ecological community is shaped by several temporally varying mechanisms. Such mechanisms depend to a large extent on species interactions, which are themselves manifestations of the community's own structure; dynamics and structure are thus mutually determined. Assembly models are mathematical or computational models that simulate the dynamics of ecological communities resulting from a historical balance between colonizations and local extinctions, by means of sequential species introductions and their interactions with resident species. They allow this double relationship between structure and dynamics to be analyzed while recognizing its temporal dependence. Two spatiotemporal scales are assumed: (i) a local scale, where species co-occur and have their dynamics explicitly simulated, and (ii) a regional scale without dynamics, representing the external environment from which the potential colonizers come. The mathematical and computational models used to simulate the local dynamics are quite variable, distinguished by the complexity of their population representation, including or omitting intra- or interspecific differences. They determine the community state, in terms of abundances, interactions and extinctions, between two successive colonization attempts. The schedules of species introductions also follow diverse (although arbitrary) rules, which vary qualitatively with respect to the mode of species appearance, whether by speciation or by immigration, and quantitatively with respect to the rates of introduction into the community. Combining these criteria yields a great range of approaches to assembly models, each with its own limitations and questions, but contributing in a complementary way to elucidating the mechanisms that structure natural communities.
The objectives of the present review are to present these approaches, still incipient as a research field in Brazil, to describe some methods of analysis, and to discuss the implications of their assumptions for the understanding of ecological patterns.
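The assembly scheme described above, sequential introductions from a regional pool with local dynamics and extinctions in between, can be sketched in a few lines; the Lotka-Volterra competition dynamics, rates and extinction threshold below are all illustrative assumptions, one of many modeling choices the review surveys.

```python
import numpy as np

# Toy assembly model: species invade one at a time from a regional pool;
# local Lotka-Volterra competition runs between introductions, and
# species falling below a threshold go locally extinct.
rng = np.random.default_rng(42)
pool = 30                                         # regional pool size
A = -np.abs(rng.normal(0.5, 0.2, (pool, pool)))   # competitive interactions
np.fill_diagonal(A, -1.0)                         # self-limitation
r = rng.uniform(0.5, 1.0, pool)                   # intrinsic growth rates

n = np.zeros(pool)                                # local abundances
for invader in rng.permutation(pool):             # introduction schedule
    n[invader] = max(n[invader], 0.01)            # a propagule arrives
    for _ in range(500):                          # dynamics between arrivals
        n += 0.01 * n * (r + A @ n)               # Euler step of dn/dt = n(r + A n)
        n[n < 1e-4] = 0.0                         # local extinction threshold

richness = int(np.sum(n > 0))
print("species persisting locally:", richness)
```

Varying the introduction schedule, the interaction structure, or the population representation reproduces, in miniature, the space of approaches the review classifies.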
Resumo:
Coordenação de Aperfeiçoamento de Pessoal de Nível Superior (CAPES)
Structural study of cyclin-dependent kinases by comparative molecular modeling methods
Resumo:
Coordenação de Aperfeiçoamento de Pessoal de Nível Superior (CAPES)
Resumo:
Coordenação de Aperfeiçoamento de Pessoal de Nível Superior (CAPES)