77 results for Conjuntos densificables


Relevance:

10.00%

Publisher:

Abstract:

In this paper we investigate the band-structure and transmittance spectra of magnonic quasicrystals that exhibit so-called deterministic disorder: magnetic multilayer systems built according to the generalized Fibonacci (golden mean (GM), silver mean (SM), bronze mean (BM), copper mean (CM) and nickel mean (NM) cases only) and k-component Fibonacci substitutional sequences. The theoretical model is based on the Heisenberg Hamiltonian in the exchange regime, together with the powerful transfer-matrix method, taking into account the random phase approximation (RPA). The magnetic materials considered are simple-cubic ferromagnets. Our main interest in this study is to investigate the effects of quasiperiodicity on the physical properties of these systems by analyzing the behavior of spin-wave propagation through their dispersion and transmission spectra. Among the results, we highlight: (i) for the generalized Fibonacci sequences, the fragmentation of the bulk bands, which in the limit of high generations becomes a Cantor set, and the presence of the mid-gap frequency in the spin-wave transmission; and (ii) the strong dependence of the magnonic band gap on the parameter k, which determines how many different magnetic materials are present in the quasicrystal, and on n, the generation number of the k-component Fibonacci sequence. In this last case, we verify that the system presents a magnonic band gap whose width and frequency region can be controlled by varying k and n. In the exchange regime, the spin waves propagate with frequencies of the order of a few tens of terahertz (THz). Therefore, from an experimental and technological point of view, magnonic quasicrystals can be used as carriers or processors of information, with the magnon (the quantum of the spin wave) responsible for this transport and processing.
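A minimal sketch of how the stacking orders named above can be generated, assuming the common metallic-mean convention S_n = (S_{n-1})^p (S_{n-2})^q with (p, q) = (1, 1), (2, 1), (3, 1), (1, 2) and (1, 3) for the golden, silver, bronze, copper and nickel means, respectively; exact conventions and seeds vary between papers, so this is illustrative rather than the thesis' exact construction.

```python
# Metallic-mean substitution pairs (p, q), one common convention.
METALLIC_MEANS = {
    "golden": (1, 1),   # GM
    "silver": (2, 1),   # SM
    "bronze": (3, 1),   # BM
    "copper": (1, 2),   # CM
    "nickel": (1, 3),   # NM
}

def generalized_fibonacci(mean: str, generation: int) -> str:
    """Return the A/B layer stacking for a given metallic mean and generation,
    using S_n = (S_{n-1})^p (S_{n-2})^q with S_0 = "B", S_1 = "A"."""
    p, q = METALLIC_MEANS[mean]
    prev, curr = "B", "A"
    for _ in range(generation - 1):
        prev, curr = curr, curr * p + prev * q
    return curr

if __name__ == "__main__":
    for mean in METALLIC_MEANS:
        seq = generalized_fibonacci(mean, 6)
        print(f"{mean:>6}: {len(seq):4d} layers, prefix {seq[:12]}")
```

For the golden mean, the layer counts grow as the Fibonacci numbers (1, 2, 3, 5, 8, 13, ...), as expected.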

Relevance:

10.00%

Publisher:

Abstract:

In this work, we study the survival cure rate model proposed by Yakovlev et al. (1993), based on a structure of competing risks concurring to cause the event of interest, together with the approach proposed by Chen et al. (1999), in which covariates are introduced to model the number of risks. We focus on covariates subject to measurement error, considering the corrected score method in order to obtain consistent estimators. A simulation study evaluates the behavior of the estimators obtained by this method in finite samples. The simulation aims to identify the impact not only on the regression coefficients of the covariates measured with error (Mizoi et al. 2007) but also on the coefficients of covariates measured without error. We also verify the adequacy of the piecewise exponential distribution for the cure rate model with measurement error. Finally, the model is applied to real data.
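As an illustration of the data-generating mechanism behind this class of models, here is a minimal sketch, under assumed parameter values, of simulation from the promotion-time cure rate structure of Chen et al. (1999): a Poisson number of latent competing causes with mean theta = exp(x'beta), where subjects with zero causes are cured; it omits censoring and measurement error, which the thesis adds on top.

```python
import numpy as np

rng = np.random.default_rng(42)

def simulate_cure_data(n, beta, rate=1.0):
    """Simulate (covariate, event time) pairs; cured subjects get t = inf."""
    x = rng.normal(size=n)                    # covariate (here measured without error)
    theta = np.exp(beta[0] + beta[1] * x)     # mean number of competing causes
    n_causes = rng.poisson(theta)
    t = np.full(n, np.inf)                    # cured subjects never experience the event
    for i in range(n):
        if n_causes[i] > 0:
            # event time = minimum of the latent cause-specific times
            t[i] = rng.exponential(1.0 / rate, size=n_causes[i]).min()
    return x, t

x, t = simulate_cure_data(1000, beta=(0.3, 0.8))
print(f"cure fraction: {np.isinf(t).mean():.2%}")
```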

Relevance:

10.00%

Publisher:

Abstract:

Recently, genetically encoded optical indicators have emerged as noninvasive tools of high spatial and temporal resolution used to monitor the activity of individual neurons and specific neuronal populations. The increasing number of new optogenetic indicators, together with the absence of comparisons under identical conditions, makes it difficult to choose the most appropriate protein for a given experimental design. Therefore, the purpose of our study was to compare three recently developed reporter proteins: the calcium indicators GCaMP3 and R-GECO1, and the voltage indicator VSFP butterfly 1.2. These probes were expressed in cultured hippocampal neurons, which were subjected to patch-clamp recordings and optical imaging. The three groups (each expressing one protein) exhibited similar values of membrane potential (in mV, GCaMP3: -56 ±8.0; R-GECO1: -57 ±2.5; VSFP: -60 ±3.9; p = 0.86); however, the neurons expressing VSFP showed a lower average input resistance than the other groups (in MΩ, GCaMP3: 161 ±18.3; R-GECO1: 128 ±15.3; VSFP: 94 ±14.0; p = 0.02). Each neuron was subjected to current injections at different frequencies (10 Hz, 5 Hz, 3 Hz, 1.5 Hz and 0.7 Hz) and its fluorescence responses were recorded over time. In our study, only 26.7% (4/15) of the neurons expressing VSFP showed a detectable fluorescence signal in response to action potentials (APs). The average signal-to-noise ratio (SNR) obtained in response to five spikes (at 10 Hz) was small (1.3 ±0.21); however, the rapid kinetics of the VSFP allowed discrimination of APs as individual peaks, with detection of 53% of the evoked APs. Frequencies below 5 Hz and subthreshold signals were undetectable due to high noise. The calcium indicators, on the other hand, showed the greatest change in fluorescence under the same protocol (five APs at 10 Hz). Among the GCaMP3-expressing neurons, 80% (8/10) exhibited a signal, with an average SNR of 21 ±6.69 (soma), while for R-GECO1, 50% (2/4) of the neurons had a signal, with a mean SNR of 52 ±19.7 (soma). For protocols at 10 Hz, 54% of the evoked APs were detected with GCaMP3 and 85% with R-GECO1. APs were detectable at all frequencies analyzed, and fluorescence signals were also detected from subthreshold depolarizations. Because GCaMP3 was the most likely to yield a fluorescence signal, and with high SNR, some experiments were performed only with this probe. We demonstrate that GCaMP3 is effective in detecting synaptic inputs (involving Ca2+ influx) with high spatial and temporal resolution. Differences were also observed between the SNR values resulting from evoked APs and those from spontaneous APs. In recordings of groups of cells, GCaMP3 showed clear discrimination between activated and silent cells, revealing itself as a potential tool in studies of neuronal synchronization. Thus, our results indicate that the presently available calcium indicators allow detailed studies of neuronal communication, ranging from individual dendritic spines to the investigation of synchrony events in genetically defined neuronal networks. In turn, VSFPs represent a promising technology for monitoring neural activity and, although still in need of improvement, may become more appropriate than calcium indicators, since neurons work on a time scale faster than calcium events can report.
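For concreteness, a minimal sketch of one common way such SNR figures are computed from an imaging trace: ΔF/F relative to a baseline window, with SNR taken as the peak response over the baseline noise. Definitions vary between imaging studies, so the exact formula used in the thesis may differ; the trace below is a synthetic toy.

```python
import numpy as np

def delta_f_over_f(trace: np.ndarray, baseline_frames: int) -> np.ndarray:
    """dF/F with F0 estimated as the mean over the baseline window."""
    f0 = trace[:baseline_frames].mean()
    return (trace - f0) / f0

def snr(trace: np.ndarray, baseline_frames: int) -> float:
    """Peak dF/F divided by the standard deviation of the baseline dF/F."""
    dff = delta_f_over_f(trace, baseline_frames)
    return dff.max() / dff[:baseline_frames].std()

rng = np.random.default_rng(1)
# toy trace: 50 noisy baseline frames, then a decaying fluorescence transient
trace = np.concatenate([rng.normal(100, 1, 50),
                        100 + 30 * np.exp(-np.arange(100) / 20)])
print(f"SNR ≈ {snr(trace, 50):.1f}")
```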

Relevance:

10.00%

Publisher:

Abstract:

The primary and accessory optic systems comprise two sets of retinorecipient neural clusters. In this study, these vision-related centers were evaluated in the rock cavy by means of the pattern of retinal innervation and Nissl-stained cytoarchitecture, after unilateral intraocular injection of cholera toxin subunit B and immunohistochemical processing of coronal and sagittal sections through the diencephalon and midbrain. Three subcortical centres of the primary visual system were identified: the superior colliculus, the lateral geniculate complex and the pretectal complex. The lateral geniculate complex is formed by a series of nuclei receiving direct visual information from the retina: the dorsal lateral geniculate nucleus, the intergeniculate leaflet and the ventral lateral geniculate nucleus. The pretectal complex is formed by a series of pretectal nuclei: the medial pretectal nucleus, olivary pretectal nucleus, posterior pretectal nucleus, nucleus of the optic tract and anterior pretectal nucleus. In the accessory optic system, retinal terminals were observed in the dorsal terminal, lateral terminal and medial terminal nuclei, as well as in the interstitial nucleus of the superior fasciculus, posterior fibres. All retinorecipient nuclei received bilateral input, with a contralateral predominance. This is the first study of this nature in the rock cavy, and the results are compared with data obtained for other species. The investigation contributes to knowledge of the organization of the visual optic systems in relation to the biology of the species.

Relevance:

10.00%

Publisher:

Abstract:

Criticism of the undergraduate training of psychologists in Brazil has raised debates known as the "dilemmas of training". In recent years, the classic training model, based on the Minimum Curriculum, has undergone a series of changes following the National Curriculum Guidelines (DCN), modifying the context of the courses. This paper therefore aimed to investigate how undergraduate Psychology courses in Brazil have been dealing with the dilemmas of training in the post-DCN context. To do so, we analyzed the Course Pedagogical Projects (CPPs) of Psychology programs across the country. Forty CPPs, selected by region, academic organization and legal status, were collected. The data were grouped into three blocks of discussion: theoretical, philosophical and pedagogical foundations; curricular emphases and disciplines; and professional practices. The results were grouped into four sets of dilemmas: a) ethical-political; b) theoretical-epistemological; c) the professional practice of the psychologist; and d) academic-scientific. The courses claim a socially committed, generalist, pluralistic training, focused on research and the non-dissociation of teaching, research and extension, with interdisciplinary training, and defend a vision of man and of a critical, reflective and non-individualistic psychology. The curricula nevertheless retain the almost exclusive teaching of the classical areas and traditional fields of applied Psychology. Training is content-based. The clinic is hegemonic, both in theory and in fields of application. Historical debate is scarce, and themes linked to the Brazilian reality are missing, even though social policies are present in the curricula. Currently, the DCNs have a much greater impact on the courses owing to the influence of regulatory agencies, a product of current educational policy, and the result is felt in the homogenization of curricular discourses.

Relevance:

10.00%

Publisher:

Abstract:

This work combines the potential of near-infrared (NIR) spectroscopy with chemometrics to determine the content of diclofenac tablets without destroying the sample, using ultraviolet spectroscopy, one of the official methods, as the reference method. In the construction of the multivariate calibration models, several types of pre-processing of the NIR spectral data were studied, such as scatter correction and the first derivative. The regression method used in the construction of the calibration models was PLS (partial least squares), applied to the NIR spectra of a set of 90 tablets divided into two subsets (calibration and prediction): 54 samples were used for calibration and 36 for prediction, since the calibration procedure used full cross-validation, which eliminates the need for a separate validation set. The models were evaluated by observing the correlation coefficient R² and the root mean square errors of calibration (RMSEC) and prediction (RMSEP). The values predicted for the remaining 36 samples were consistent with those obtained by UV spectroscopy.
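A minimal sketch of this calibration strategy, assuming synthetic stand-in spectra rather than the thesis data: first-derivative preprocessing followed by PLS regression evaluated by cross-validation, using scikit-learn.

```python
import numpy as np
from sklearn.cross_decomposition import PLSRegression
from sklearn.model_selection import cross_val_predict

rng = np.random.default_rng(0)
X = rng.normal(size=(90, 200))                    # 90 tablets x 200 wavelengths (synthetic)
y = 2.0 * X[:, 50] + rng.normal(0.1, 0.05, 90)    # assumed "content" response

X_deriv = np.gradient(X, axis=1)                  # first-derivative preprocessing

pls = PLSRegression(n_components=5)
y_cv = cross_val_predict(pls, X_deriv, y, cv=10).ravel()
rmsecv = np.sqrt(np.mean((y - y_cv) ** 2))        # cross-validated RMSE
print(f"RMSECV = {rmsecv:.3f}")
```

The number of latent variables (here 5) is the key tuning parameter of PLS; in practice it is chosen where the cross-validated error stops improving.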

Relevance:

10.00%

Publisher:

Abstract:

The Car Rental Salesman Problem (CaRS) is a variant of the classical Traveling Salesman Problem, not previously described in the literature, in which the tour of visits can be decomposed into contiguous paths performed in different rental cars. The aim is to determine the Hamiltonian cycle with minimum final cost, considering the cost of the route plus the penalty paid for each exchange of vehicles on the route, this penalty being due to the return of the dropped car to its base. This paper introduces the general problem and illustrates it with examples, also presenting some of its associated variants. An overview of the complexity of this combinatorial problem is outlined to justify its classification in the NP-hard class. A database of instances for the problem is presented, along with the methodology of its construction. The problem is also the subject of an experimental algorithmic study of six metaheuristic solutions, representing adaptations of the best of state-of-the-art heuristic programming. New neighborhoods, construction procedures, search operators, evolutionary agents and multi-pheromone cooperation schemes are created for this problem. Furthermore, computational experiments and comparative performance tests are conducted on a sample of 60 instances of the created database, aiming to offer an efficient algorithm for this problem. The results show that the transgenetic algorithm reached the best performance on all instances of the dataset.
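A minimal sketch of the objective just described, under an assumed encoding (per-car travel-cost matrices plus a return-fee matrix charged whenever a car is dropped away from where it was rented); the instance below is a toy, not one from the thesis database.

```python
def cars_cost(tour, cars, travel, penalty):
    """tour: closed route, tour[0] == tour[-1] == home city;
    cars[i]: car used on leg tour[i] -> tour[i+1];
    travel[c][a][b]: travel cost between a and b using car c;
    penalty[c][a][b]: fee to return car c, rented at a and dropped at b
    (penalty[c][a][a] is assumed to be 0)."""
    total, rented_at = 0, tour[0]
    for i in range(len(tour) - 1):
        a, b, c = tour[i], tour[i + 1], cars[i]
        total += travel[c][a][b]
        last_leg = (i == len(tour) - 2)
        if last_leg or cars[i + 1] != c:       # car c is dropped at city b
            total += penalty[c][rented_at][b]
            rented_at = b
    return total

# toy instance: 3 cities, 2 cars
T = [[[0, 5, 9], [5, 0, 4], [9, 4, 0]],        # travel costs, car 0
     [[0, 3, 7], [3, 0, 6], [7, 6, 0]]]        # travel costs, car 1
P = [[[0, 2, 2], [2, 0, 2], [2, 2, 0]],        # return fees, car 0
     [[0, 1, 1], [1, 0, 1], [1, 1, 0]]]        # return fees, car 1
print(cars_cost([0, 1, 2, 0], cars=[1, 1, 0], travel=T, penalty=P))  # 21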

Relevance:

10.00%

Publisher:

Abstract:

The Quadratic Minimum Spanning Tree Problem (QMST) is a version of the Minimum Spanning Tree Problem in which, besides the traditional linear costs, there is a quadratic cost structure. This quadratic structure models interaction effects between pairs of edges. Linear and quadratic costs are added together to constitute the total cost of the spanning tree, which must be minimized. When these interactions are restricted to adjacent edges, the problem is called the Adjacent Only Quadratic Minimum Spanning Tree Problem (AQMST). AQMST and QMST are NP-hard problems that model several problems in the design of transport and distribution networks, with AQMST generally arising as the more suitable model for real problems. Although in the literature linear and quadratic costs are added, in real applications they may be conflicting, in which case it may be interesting to consider the costs separately. In this sense, multiobjective optimization provides a more realistic model for QMST and AQMST. A review of the state of the art found no papers addressing these problems from a biobjective point of view. Thus, the objective of this thesis is the development of exact and heuristic algorithms for the Biobjective Adjacent Only Quadratic Spanning Tree Problem (bi-AQST). As theoretical foundation, other NP-hard problems directly related to bi-AQST are discussed: the QMST and AQMST problems. Backtracking and branch-and-bound exact algorithms are proposed for the target problem of this investigation. The heuristic algorithms developed are: Pareto Local Search, Tabu Search with ejection chains, a Transgenetic Algorithm, NSGA-II, and a hybridization of the last two called NSTA. The proposed algorithms are compared with each other through performance analysis on computational experiments with instances adapted from the QMST literature. For the exact algorithms, the analysis considers, in particular, the execution time. For the heuristic algorithms, besides execution time, the quality of the generated approximation sets is evaluated by means of quality indicators, with appropriate statistical tools used to measure performance. Considering the set of instances adopted, as well as the criteria of execution time and quality of the generated approximation sets, the experiments showed that the Tabu Search with ejection chains obtained the best results, with the transgenetic algorithm ranked second. The PLS algorithm obtained good-quality solutions, but at a very high computational cost compared to the other (meta)heuristics, taking third place. The NSTA and NSGA-II algorithms took the last positions.
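A minimal sketch of the AQMST objective, under an assumed cost encoding: linear edge costs plus quadratic interaction costs restricted to pairs of edges sharing a vertex (the "adjacent only" restriction); the toy tree below is illustrative.

```python
from itertools import combinations

def aqmst_cost(tree_edges, linear, quad):
    """tree_edges: list of (u, v) tuples forming a spanning tree;
    linear[e]: linear cost of edge e;
    quad[(e1, e2)]: interaction cost of two adjacent edges (stored once)."""
    cost = sum(linear[e] for e in tree_edges)
    for e1, e2 in combinations(tree_edges, 2):
        if set(e1) & set(e2):                  # edges share a vertex -> adjacent
            cost += quad.get((e1, e2), quad.get((e2, e1), 0))
    return cost

# toy example: star tree on 4 vertices
edges = [(0, 1), (0, 2), (0, 3)]
lin = {e: 1 for e in edges}
q = {((0, 1), (0, 2)): 5, ((0, 2), (0, 3)): 2}
print(aqmst_cost(edges, lin, q))               # 3 linear + 7 quadratic = 10
```

In the biobjective version studied in the thesis, the linear sum and the quadratic sum would be kept as two separate objectives instead of being added.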

Relevance:

10.00%

Publisher:

Abstract:

Although individual techniques of supervised Machine Learning (ML), also known as classifiers or classification algorithms, supply solutions that are most of the time considered efficient, experimental results obtained with large pattern sets, or with sets containing a significant amount of irrelevant or incomplete data, show a decrease in the precision of these techniques. In other words, such techniques cannot recognize patterns efficiently in complex problems. With the intention of improving the performance and efficiency of these ML techniques, the idea arose of making several ML algorithms work jointly, giving origin to the term Multi-Classifier System (MCS). An MCS has as components different ML algorithms, called base classifiers, and combines the results obtained by these algorithms to reach the final result. For an MCS to perform better than its base classifiers, the results obtained by the base classifiers must present a certain diversity, in other words, a difference between the results obtained by each classifier composing the system; there is no point in an MCS whose base classifiers give identical answers to the same patterns. Although MCSs present better results than individual systems, there is an ongoing search to improve the results obtained by this type of system. Aiming at this improvement, at better consistency in the results, and at greater diversity among the classifiers of an MCS, methodologies characterized by the use of weights, or confidence values, have recently been investigated. These weights can describe the importance that a certain classifier had when associating each pattern with a given class, and they are used, in combination with the outputs of the classifiers, during the recognition (use) phase of the MCS. There are different ways of calculating these weights, which can be divided into two categories: static weights, whose values do not change during the classification process, and dynamic weights, whose values are modified during the classification process. In this work, an analysis is made to verify whether the use of weights, both static and dynamic, can increase the performance of MCSs in comparison with individual systems. Moreover, the diversity obtained by the MCSs is analyzed, in order to verify whether there is a relation between the use of weights in MCSs and different levels of diversity.
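A minimal sketch of the static-weight case: weighted majority voting, where each base classifier's vote is scaled by a fixed weight that does not change during classification. The weights and predictions below are illustrative; a dynamic scheme would recompute the weights per pattern instead.

```python
import numpy as np

def weighted_vote(predictions, weights, n_classes):
    """predictions: (n_classifiers, n_samples) array of predicted class labels;
    weights: one static weight per base classifier.
    Returns the class with the highest accumulated weight per sample."""
    n_samples = predictions.shape[1]
    scores = np.zeros((n_samples, n_classes))
    for clf_preds, w in zip(predictions, weights):
        for i, label in enumerate(clf_preds):
            scores[i, label] += w
    return scores.argmax(axis=1)

preds = np.array([[0, 1, 1],       # base classifier 1
                  [0, 1, 0],       # base classifier 2
                  [1, 1, 0]])      # base classifier 3
print(weighted_vote(preds, weights=[0.6, 0.3, 0.1], n_classes=2))  # [0 1 1]
```

Note how the third sample follows the heavily weighted first classifier even though the other two disagree with it.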

Relevance:

10.00%

Publisher:

Abstract:

This work performs an algorithmic study of the optimization of a conformal radiotherapy treatment plan. Initially, we give an overview of cancer, radiotherapy and the physics of the interaction of ionizing radiation with matter. A proposal for the optimization of a radiotherapy treatment plan is then developed in a systematic way. We present the multicriteria problem paradigm and the concepts of Pareto optimality and Pareto dominance, and propose a generic optimization model for radiotherapy treatment. We construct the input of the model, estimate the dose delivered by the radiation using the dose matrix, and state the objective function of the model. The complexity of optimization models in radiotherapy treatment is typically NP, which justifies the use of heuristic methods. We propose three distinct methods: MOGA, MOSA and MOTS. The design of these three metaheuristic procedures is presented; for each one we give a brief motivation, the algorithm itself and the method for tuning its parameters. The three methods are applied to a concrete case and their performances are compared. Finally, for each method, we analyze the quality of the Pareto sets, some of the solutions and the respective Pareto curves.
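Since Pareto dominance underlies all three metaheuristics, here is a minimal sketch of the definition for minimization objectives, plus a filter extracting the non-dominated set from a list of candidate plans; the two-objective vectors (e.g., target underdose vs. dose to organs at risk) are illustrative.

```python
def dominates(a, b):
    """True if a dominates b: no worse in every objective, better in at least one."""
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def pareto_front(points):
    """Return the non-dominated (Pareto) subset of a list of objective vectors."""
    return [p for p in points
            if not any(dominates(q, p) for q in points if q is not p)]

candidates = [(1.0, 5.0), (2.0, 3.0), (3.0, 4.0), (4.0, 1.0)]
print(pareto_front(candidates))    # (3.0, 4.0) is dominated by (2.0, 3.0)
```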

Relevance:

10.00%

Publisher:

Abstract:

This work presents an extension of the haRVey prover aimed at verifying proof obligations generated according to the B method. The B method of software development covers the specification, design and implementation phases of the software life cycle. In the context of verification, the proof tools Prioni, Z/EVES and Atelier-B/Click'n Prove stand out: they implement formalisms that support checking the satisfiability of formulas of axiomatic set theory, and can therefore be applied to the B method. SMT checking consists of checking the satisfiability of quantifier-free first-order formulas with respect to a decidable theory. The SMT-checking approach implemented by the automatic theorem prover haRVey is presented; it adopts a theory of arrays, which cannot express all the constructions required by set-based specifications. To extend SMT checking to set theories, the Zermelo-Fraenkel (ZFC) and von Neumann-Bernays-Gödel (NBG) set theories stand out. Given that the SMT-checking approach implemented in haRVey requires a finitely axiomatized theory and can be extended to undecidable theories, NBG presents itself as a suitable option for expanding haRVey's deductive capability to set theory. Thus, by mapping the set operators provided by the B language to classes of NBG theory, an alternative approach to SMT checking applied to the B method is obtained.
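haRVey itself is not publicly scriptable here; as a general illustration of what SMT checking means (satisfiability of a quantifier-free first-order formula over a decidable theory, here linear integer arithmetic), this sketch uses the Z3 solver's Python API (package `z3-solver`) instead.

```python
from z3 import Ints, Solver, sat

x, y = Ints("x y")
s = Solver()
# a quantifier-free conjunction over the integers
s.add(x + y == 10, x > y, y >= 0)
if s.check() == sat:
    print(s.model())      # a satisfying assignment, e.g. x = 6, y = 4
else:
    print("unsatisfiable")
```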

Relevance:

10.00%

Publisher:

Abstract:

The use of clustering methods for the discovery of cancer subtypes has drawn a great deal of attention in the scientific community. While bioinformaticians have proposed new clustering methods that take advantage of characteristics of the gene expression data, the medical community has a preference for using classic clustering methods. There have been no studies thus far performing a large-scale evaluation of different clustering methods in this context. This work presents the first large-scale analysis of seven different clustering methods and four proximity measures for the analysis of 35 cancer gene expression data sets. Results reveal that the finite mixture of Gaussians, followed closely by k-means, exhibited the best performance in terms of recovering the true structure of the data sets. These methods also exhibited, on average, the smallest difference between the actual number of classes in the data sets and the best number of clusters as indicated by our validation criteria. Furthermore, hierarchical methods, which have been widely used by the medical community, exhibited a poorer recovery performance than that of the other methods evaluated. Moreover, as a stable basis for the assessment and comparison of different clustering methods for cancer gene expression data, this study provides a common group of data sets (benchmark data sets) to be shared among researchers and used for comparisons with new methods
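A minimal sketch of the kind of comparison described, on synthetic expression-like data rather than the 35 benchmark sets: a finite mixture of Gaussians versus k-means, scored by how well each recovers the known labels using the adjusted Rand index.

```python
from sklearn.cluster import KMeans
from sklearn.datasets import make_blobs
from sklearn.metrics import adjusted_rand_score
from sklearn.mixture import GaussianMixture

# synthetic stand-in: 150 samples, 50 "genes", 3 true classes
X, y_true = make_blobs(n_samples=150, centers=3, n_features=50, random_state=7)

gmm_labels = GaussianMixture(n_components=3, random_state=7).fit_predict(X)
km_labels = KMeans(n_clusters=3, n_init=10, random_state=7).fit_predict(X)

print("GMM ARI:    ", round(adjusted_rand_score(y_true, gmm_labels), 3))
print("k-means ARI:", round(adjusted_rand_score(y_true, km_labels), 3))
```

An ARI of 1 means perfect recovery of the true classes and 0 means chance-level agreement, which is how "recovering the true structure" is typically quantified in such studies.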

Relevance:

10.00%

Publisher:

Abstract:

Interval arithmetic, well known as Moore arithmetic, does not possess the same properties as the real numbers, and for this reason it faces a problem of an operative nature when we want to solve interval equations, as extensions of real equations, using the usual equality and the interval arithmetic: intervals have no additive inverse, and the distributivity of multiplication over addition does not hold for arbitrary triples of intervals. The lack of these properties prevents the use of equational logic, both for solving an interval equation and for representing a real equation, as well as for the algebraic verification of properties of a computational system whose data are real numbers represented by intervals. However, with the notions of information order and of approximation on intervals, introduced by Acióly [6] in 1991, the idea appears that an interval equation can satisfactorily represent a real equation, since the terms of the interval equation carry the information about the solution of the real equation. In 1999, Santiago proposed the notion of simple equality and, later, of local equality for intervals [8], [33]. Based on that idea, this dissertation extends Santiago's local groups to local algebras, following the idea of Σ-algebras according to (Hennessy [31], 1988) and (Santiago [7], 1995). One of the contributions of this dissertation is Theorem 5.1.3.2, which guarantees that, when a local Σ-equation t ≈ t′ is deduced from E in the proposed system SDedLoc(E), the interpretations of t and t′ will be locally equal in any local Σ-algebra A that satisfies the fixed set E of local equations, whenever t and t′ have meaning in A. This ensures a kind of soundness relating the local equational logic and the local algebras.
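A minimal sketch of Moore arithmetic on closed intervals [lo, hi], showing concretely the failure of distributivity mentioned above: x·(y+z) can be strictly narrower than x·y + x·z (only sub-distributivity holds).

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Interval:
    lo: float
    hi: float

    def __add__(self, other):
        # [a, b] + [c, d] = [a + c, b + d]
        return Interval(self.lo + other.lo, self.hi + other.hi)

    def __mul__(self, other):
        # [a, b] * [c, d] = [min of endpoint products, max of endpoint products]
        products = [self.lo * other.lo, self.lo * other.hi,
                    self.hi * other.lo, self.hi * other.hi]
        return Interval(min(products), max(products))

x = Interval(-1, 1)
y = Interval(1, 2)
z = Interval(-2, -1)
print(x * (y + z))      # Interval(lo=-1, hi=1)
print(x * y + x * z)    # Interval(lo=-4, hi=4)  -> strictly wider
```

Because the two sides differ, rewriting one into the other, which equational logic would license over the reals, is unsound over intervals; this is the operative problem that motivates the local-equality approach.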

Relevance:

10.00%

Publisher:

Abstract:

Despite the emergence of other forms of artificial lift, sucker rod pumping systems remain dominant because of their flexibility of operation and lower investment cost compared to the other lifting techniques developed. Successful sizing of a rod pumping system necessarily involves delivering the estimated flow rate while keeping the wear of the pumping equipment in the mounted configuration under control. Balancing these requirements is particularly challenging, especially for the many designers dealing with this work who still lack the experience needed to produce good pumping designs in good time. Even with the various computer applications on the market intended to facilitate this task, designers must face a grueling process of trial and error until they get the most appropriate combination of equipment for installation in the well. This thesis proposes the creation of an expert system for the design of sucker rod pumping systems. Its mission is to guide a petroleum engineer in the task of selecting a range of equipment appropriate to the context given by the characteristics of the oil that will be lifted to the surface. Features such as the level of gas separation, the presence of corrosive elements, and the possibility of sand production and waxing are taken into account in selecting the pumping unit, the sucker-rod string, the subsurface pump and their operating mode. The system is able to approximate the inference process to human reasoning, which leads to results closer to those obtained by a specialist. To this end, its production rules are based on the theory of fuzzy sets, able to model the vague concepts typically present in human reasoning. The operating parameters of the pumping system are calculated by the API RP 11L method. Based on the input information, the system returns to the user a set of pumping configurations that meet a given design flow rate without subjecting the selected equipment to efforts beyond those it can bear.
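A minimal sketch of the fuzzy-set machinery such production rules rest on: triangular membership functions grading a vague design concept (here a hypothetical 0-100 "gas interference" index as "low" / "medium" / "high"); the breakpoints are illustrative, not the expert system's actual values.

```python
def triangular(x, a, b, c):
    """Triangular membership function with support [a, c] and peak at b."""
    if x <= a or x >= c:
        return 0.0
    if x <= b:
        return (x - a) / (b - a)
    return (c - x) / (c - b)

# illustrative linguistic terms over a 0-100 gas-interference index
sets = {"low": (0, 20, 45), "medium": (30, 50, 70), "high": (55, 80, 100)}

for x in (10, 45, 80):
    degrees = {name: round(triangular(x, *pts), 2) for name, pts in sets.items()}
    print(f"index {x}: {degrees}")
```

A rule such as "IF gas interference is high THEN prefer a pump with a gas anchor" then fires to the degree given by the membership value, rather than all-or-nothing, which is what lets the system emulate a specialist's graded judgment.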

Relevance:

10.00%

Publisher:

Abstract:

Classifier ensembles are systems composed of a set of individual classifiers and a combination module, which is responsible for providing the final output of the system. In the design of these systems, diversity is considered one of the main aspects to be taken into account, since there is no gain in combining identical classification methods; the ideal situation is a set of individual classifiers with uncorrelated errors. In other words, the individual classifiers should be diverse among themselves. One way of increasing diversity is to provide different datasets (patterns and/or attributes) to the individual classifiers. Diversity is increased because the individual classifiers perform the same task (classification of the same input patterns) but are built using different subsets of patterns and/or attributes. The majority of papers using feature selection for ensembles address homogeneous ensemble structures, i.e., ensembles composed of only one type of classifier. In this investigation, two genetic algorithm approaches (single- and multi-objective) are used to guide the distribution of the features among the classifiers, in the context of both homogeneous and heterogeneous ensembles. The experiments are divided into two phases that use a filter approach to feature selection guided by the genetic algorithm.
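A minimal sketch of the single-objective case, under an assumed encoding: a chromosome assigns each feature to one ensemble member, and fitness is the ensemble's validation accuracy under majority voting. A full genetic algorithm would wrap selection, crossover and mutation around this evaluation; the dataset and member count are illustrative.

```python
import numpy as np
from sklearn.datasets import load_iris
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier

X, y = load_iris(return_X_y=True)
Xtr, Xva, ytr, yva = train_test_split(X, y, random_state=0)
rng = np.random.default_rng(0)
n_members = 2

def fitness(chromosome):
    """chromosome[j] = index of the ensemble member that receives feature j."""
    votes = []
    for m in range(n_members):
        cols = np.where(chromosome == m)[0]
        if cols.size == 0:
            return 0.0                       # a member with no features is invalid
        clf = DecisionTreeClassifier(random_state=0).fit(Xtr[:, cols], ytr)
        votes.append(clf.predict(Xva[:, cols]))
    # majority vote across members, per validation sample
    majority = np.apply_along_axis(lambda v: np.bincount(v).argmax(),
                                   0, np.array(votes))
    return (majority == yva).mean()

chromosome = rng.integers(0, n_members, size=X.shape[1])
print(f"assignment {chromosome}, ensemble accuracy {fitness(chromosome):.2f}")
```

The multi-objective variant would score each chromosome on accuracy and a diversity measure simultaneously, selecting along the resulting Pareto front instead of by a single fitness value.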