18 results for Rings of differential operators
Abstract:
Currently, there are different definitions of fuzzy implications accepted in the literature. From a theoretical point of view, this lack of consensus shows that there is disagreement about the real meaning of "logical implication" in the Boolean and fuzzy contexts. From a practical point of view, it raises doubts about which "implication operators" software engineers should consider when implementing a Fuzzy Rule-Based System (FRBS). A poor choice of these operators can lead to FRBSs with lower accuracy that are less appropriate for their application domains. One way to deal with this situation is to understand the fuzzy logical connectives better, which requires knowing which properties such connectives can satisfy. Therefore, in order to contribute to the meaning of fuzzy implication and to the implementation of more appropriate FRBSs, several Boolean laws have been generalized and studied as equations or inequations in fuzzy logics. Such generalizations are called Boolean-like laws, and they do not generally hold in every fuzzy semantics. In this scenario, this dissertation investigates the sufficient and necessary conditions under which three Boolean-like laws, namely y ≤ I(x, y), I(x, I(y, x)) = 1 and I(x, I(y, z)) = I(I(x, y), I(x, z)), remain valid in the fuzzy context, considering six classes of fuzzy implications and implications generated by automorphisms. In addition, still with the aim of implementing more appropriate FRBSs, we propose an extension for them
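Laws like these can be checked numerically for any concrete implication operator. The minimal sketch below assumes the Łukasiewicz and Gödel implications purely as illustrative examples (they are not claimed to be among the six classes studied in the dissertation) and tests the three laws on a discretized unit interval:

```python
# Numerical check of the three Boolean-like laws for two classical fuzzy
# implications (Lukasiewicz and Goedel). These operators are illustrative
# choices only, not necessarily the classes studied in the dissertation.
import itertools

def lukasiewicz(x, y):
    return min(1.0, 1.0 - x + y)

def goedel(x, y):
    return 1.0 if x <= y else y

def check_laws(I, grid):
    eps = 1e-9  # tolerance for floating-point equality
    law1 = all(y <= I(x, y) + eps for x, y in itertools.product(grid, grid))
    law2 = all(abs(I(x, I(y, x)) - 1.0) < eps for x, y in itertools.product(grid, grid))
    law3 = all(abs(I(x, I(y, z)) - I(I(x, y), I(x, z))) < eps
               for x, y, z in itertools.product(grid, grid, grid))
    return law1, law2, law3

grid = [i / 10 for i in range(11)]  # discretized unit interval [0, 1]
for name, I in [("Lukasiewicz", lukasiewicz), ("Goedel", goedel)]:
    print(name, check_laws(I, grid))
```

On this grid both implications satisfy the first two laws, while the self-distributivity law I(x, I(y, z)) = I(I(x, y), I(x, z)) already fails for the Łukasiewicz implication, which is exactly the kind of class-dependent behaviour the dissertation investigates.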
Abstract:
Human mesenchymal stem cells (MSC) are powerful sources for cell therapy in regenerative medicine. Long-term cultivation can result in replicative senescence or can be related to the emergence of chromosomal alterations responsible for the acquisition of tumorigenic features in vitro. In this study, for the first time, the expression profile of MSC with a paracentric chromosomal inversion (MSC/inv) was compared to that of MSC with a normal karyotype (MSC/n) in early and late passages. Furthermore, we compared the transcriptome of each MSC group in early passages with that in late passages. The MSC used in this study were obtained from the umbilical vein of three donors, two MSC/n and one MSC/inv. After cryopreservation, they were expanded in vitro until they reached senescence. Total RNA was extracted using the RNeasy mini kit (Qiagen) and labeled with the GeneChip® 3' IVT Express Kit (Affymetrix Inc.). Subsequently, the fragmented aRNA was hybridized on Affymetrix Human Genome U133 Plus 2.0 microarrays (Affymetrix Inc.). The statistical analysis of differential gene expression between the MSC groups was performed with the Partek Genomic Suite software, version 6.4 (Partek Inc.). Differences in expression were considered statistically significant at a Bonferroni-corrected p-value < 0.01. Only signals with fold change > 3.0 were included in the list of differentially expressed genes. Differences in gene expression obtained from the microarrays were confirmed by real-time RT-PCR. For the biological interpretation of the expression data, the following tools were used: IPA (Ingenuity Systems) for functional enrichment analysis, STRING 9.0 for construction of interaction networks, and Cytoscape 2.8 for network visualization and bottleneck analysis, with the aid of the GraphPad Prism 5.0 software. The BiNGO Cytoscape plugin was used to assess overrepresentation of Gene Ontology categories in the biological networks. The comparison between senescent and young cells within each MSC group showed differences in the expression pattern, which were greater in the senescent MSC/inv group. The results also showed differences in the expression profiles between MSC/inv and MSC/n, which were greater when the cells were senescent. New networks were identified for genes related to the response of the two MSC groups over cultivation time. Genes that may coordinate functional categories overrepresented in the networks were also identified, such as CXCL12, SFRP1, EGF, SPP1, MMP1 and THBS1. The biological interpretation of these data suggests that the MSC/inv population has different constitutional characteristics, related to its potential for differentiation, proliferation and response to stimuli, responsible for a process of replicative senescence in MSC/inv distinct from that in MSC/n. The genes identified in this study are candidates for biomarkers of cellular senescence in MSC, but their functional relevance in this process should be evaluated in additional in vitro and/or in vivo assays.
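As a rough illustration of the selection criterion described above (Bonferroni-corrected p-value < 0.01 and fold change > 3.0), the following sketch filters a hypothetical per-gene results table; the table, column names and values are invented, and the actual analysis was carried out with the Partek software:

```python
# Minimal sketch of the gene-selection criterion described in the abstract:
# keep genes with Bonferroni-corrected p-value < 0.01 and |fold change| > 3.
# The input table and its columns are hypothetical.
import pandas as pd

def select_differential(df, alpha=0.01, fc_cutoff=3.0):
    """df needs columns: 'gene', 'p_value' (raw), 'fold_change' (signed)."""
    n_tests = len(df)
    bonferroni_p = (df["p_value"] * n_tests).clip(upper=1.0)  # Bonferroni correction
    mask = (bonferroni_p < alpha) & (df["fold_change"].abs() > fc_cutoff)
    return df.loc[mask].assign(p_bonferroni=bonferroni_p[mask])

# Example with made-up values:
table = pd.DataFrame({
    "gene": ["CXCL12", "SFRP1", "EGF"],
    "p_value": [1e-7, 2e-3, 0.4],
    "fold_change": [-4.2, 3.5, 1.1],
})
print(select_differential(table))
```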
Abstract:
With the advances in medicine, the life expectancy of the world population has grown considerably in recent decades. Studies have been performed in order to maintain quality of life through the development of new drugs and new surgical procedures. Biomaterials are one example of such research to improve quality of life, and their uses range from the reconstruction of tissues and organs affected by disease or other types of failure to drug delivery systems able to prolong the presence of a drug in the body and increase its bioavailability. Biopolymers are a class of biomaterials widely targeted by researchers, since they have ideal properties for biomedical applications, such as high biocompatibility and biodegradability. Poly(lactic acid) (PLA) is a biopolymer used as a biomaterial, and its monomer, lactic acid, is eliminated through the Krebs cycle (citric acid cycle). PLA can be synthesized through various routes; however, direct polycondensation is cheaper because it involves fewer polymerization steps. In this work we used design of experiments (DOE) to produce PLAs with different molecular weights from the direct polycondensation of lactic acid, with characteristics suitable for use in drug delivery systems (DDS). The experimental design showed that the esterification time is the most important stage of the direct polycondensation for obtaining a higher molecular weight. The Fourier Transform Infrared (FTIR) spectra obtained were equivalent to those of PLAs reported in the literature. Differential Scanning Calorimetry (DSC) results showed that all PLAs produced are semicrystalline, with glass transition temperatures (Tg) ranging from 36 to 48 °C and melting temperatures (Tm) ranging from 117 to 130 °C. The PLA molecular weights, characterized by Size Exclusion Chromatography (SEC), varied from 1,000 to 11,000 g/mol. The PLAs obtained showed a fibrous morphology, as characterized by Scanning Electron Microscopy (SEM).
Abstract:
The aim of the present study is to reevaluate the logical thought of the English mathematician George Boole (1815 - 1864). Thus, our research centers on the mathematical analysis of logic in the context of the history of mathematics. In order to do so, we present various biographical considerations about Boole in the light of events that happened in the 19th century and their consequences for mathematical production. We briefly describe Boole's innovations in the areas of differential equations and invariant theory and undertake an analysis of Boole's logic, especially as formulated in the book The Mathematical Analysis of Logic, comparing it not only with traditional Aristotelian logic, but also with modern symbolic logic. We conclude that Boole, as he intended, expanded logic both in terms of its content and in terms of its methods and formal elaboration. We further conclude that his purpose was the mathematical modeling of deductive reasoning, which led him to present an innovative formalism for logic and, because of the different ways it can be interpreted, a new conception of mathematics.
Abstract:
This article reports on research that attempts to historically (re)construct the conceptual development of Differential and Integral Calculus, taking into account its model-building character, from the Greeks to Newton. These models were created in response to the problems posed throughout history and were modified as new problems appeared and mathematical knowledge advanced. From this perspective, I also show how a number of natural philosophers and mathematicians became involved in this process. Starting with the scientific and philosophical speculations of the ancient Greeks, it culminates with Newton's work in the 17th century. Moreover, I present and analyze the problems proposed (open questions) and the models generated (questions answered), as well as the religious, political, economic and social conditions involved. This work is divided into 6 chapters plus the final considerations. Chapter 1 shows how the research came about, given my motivation and experience. I outline the paths I went through to refine the main question, present the subject and objectives of the research, and end the chapter by showing the theoretical bases on which the research was carried out, naming such bases Investigation Theoretical Fields (ITF). Chapter 2 presents each of the theoretical bases introduced at the end of Chapter 1; in this discussion, I try to connect the ITF to the research. Chapter 3 discusses the methodological choices made in light of the theoretical fields adopted. Chapters 4, 5 and 6 present the main body of the research, i.e., they reconstruct the history of calculus from the perspective of model building (questions answered) from the problems posed (open questions), analyzing the contributions of the ancient Greeks (Chapter 4) and of the post-Greek period, especially the Romans, the Hindus, the Arabs and the Middle Ages (Chapter 5); I then relate the European Renaissance and the contributions of philosophers and scientists, culminating with Newton's work (Chapter 6). In the final considerations, I give an account of my impressions of the development of the research, as well as the results reached. Finally, I outline a proposal for a course on Differential and Integral Calculus based on the last three chapters of the article.
Abstract:
We propose a multi-resolution approach for surface reconstruction from clouds of unorganized points representing an object surface in 3D space. The proposed method uses a set of mesh operators and simple rules for selective mesh refinement, with a strategy based on Kohonen's self-organizing map. Basically, a self-adaptive scheme is used for iteratively moving the vertices of an initial simple mesh towards the set of points, ideally the object boundary. Successive refinement and motion of the vertices are applied, leading to a more detailed surface in a multi-resolution, iterative scheme. Reconstruction was tested with several point sets, including different shapes and sizes. Results show that the generated meshes are very close to the final object shapes. We include performance measures and discuss robustness.
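A minimal sketch of the kind of self-organizing vertex update described above is shown below: each sample point attracts its best-matching mesh vertex and, more weakly, that vertex's neighbours. The mesh operators and selective refinement rules of the proposed method are not reproduced; all parameter names and values are illustrative:

```python
# SOM-style vertex update: sample points pull their closest mesh vertex (and,
# more weakly, its neighbours) toward the object surface. Illustrative only.
import numpy as np

def som_update(vertices, neighbours, points, lr=0.1, neigh_lr=0.05, epochs=20):
    """vertices: (V, 3) array; neighbours: dict vertex -> list of vertex ids;
    points: (N, 3) array of unorganized surface samples."""
    v = vertices.copy()
    for _ in range(epochs):
        for p in points:
            winner = np.argmin(np.linalg.norm(v - p, axis=1))  # best-matching vertex
            v[winner] += lr * (p - v[winner])                  # move winner toward sample
            for nb in neighbours[winner]:                      # drag neighbours along
                v[nb] += neigh_lr * (p - v[nb])
    return v

# Toy usage: a 4-vertex loop adapting to noisy samples of a unit square.
verts = np.array([[0.2, 0.2, 0], [0.8, 0.2, 0], [0.8, 0.8, 0], [0.2, 0.8, 0]], float)
neigh = {0: [1, 3], 1: [0, 2], 2: [1, 3], 3: [0, 2]}
samples = np.random.rand(200, 3) * [1, 1, 0]
print(som_update(verts, neigh, samples))
```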
Abstract:
In this work, Markov chains are the tool used for modeling and analyzing the convergence of the genetic algorithm, both in its standard version and in the other versions the genetic algorithm admits. In addition, we intend to compare the performance of the standard version with a fuzzy version, believing that the latter gives the genetic algorithm a greater ability to find a global optimum, as expected of global optimization algorithms. The choice of this algorithm is due to the fact that, over the past thirty years, it has become one of the most important tools for solving optimization problems. This choice is also due to its effectiveness in finding good-quality solutions, since a good-quality solution becomes acceptable when no other algorithm may be able to obtain the optimal solution for many of these problems. The algorithm can be configured in different ways, since it depends not only on how the problem is represented but also on how some of its operators are defined, ranging from the standard version, in which the parameters are kept fixed, to versions with variable parameters. Therefore, to achieve good performance with this algorithm, an adequate criterion is needed for choosing its parameters, especially the mutation rate, the crossover rate and the population size. It is important to remember that in implementations in which the parameters are kept fixed throughout the execution, modeling the algorithm by a Markov chain results in a homogeneous chain, whereas when the parameters are allowed to vary during the execution, the Markov chain that models it becomes non-homogeneous. Hence, in an attempt to improve the performance of the algorithm, some studies have tried to set the parameters through strategies that capture intrinsic characteristics of the problem. These characteristics are extracted from the current state of the execution, in order to identify and preserve patterns related to good-quality solutions and, at the same time, to discard low-quality patterns. Strategies for feature extraction can use either crisp or fuzzy techniques, the latter being implemented through a fuzzy controller. A Markov chain is used for the modeling and convergence analysis of the algorithm, both in its standard version and in the others. In order to evaluate the performance of the non-homogeneous algorithm, tests are applied comparing the standard genetic algorithm with the fuzzy genetic algorithm, in which the mutation rate is adjusted by a fuzzy controller. To this end, optimization problems are chosen whose number of solutions varies exponentially with the number of variables.
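The sketch below illustrates the general idea of a genetic algorithm whose mutation rate varies during the run. A simple stagnation-based rule stands in for the fuzzy controller, so it should be read as an assumption-laden illustration rather than the controller studied in this work:

```python
# Genetic algorithm with an adaptive mutation rate. The "controller" here is a
# crude heuristic (raise the rate when the best fitness stagnates, lower it
# when it improves); a fuzzy controller would replace this rule.
import random

def fitness(bits):                       # toy objective: maximize the number of 1s
    return sum(bits)

def evolve(n_bits=30, pop_size=40, generations=100):
    pop = [[random.randint(0, 1) for _ in range(n_bits)] for _ in range(pop_size)]
    mutation_rate, best_prev = 0.01, -1
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        best = fitness(pop[0])
        mutation_rate = (min(0.2, mutation_rate * 1.5) if best <= best_prev
                         else max(0.005, mutation_rate * 0.8))
        best_prev = best
        parents = pop[:pop_size // 2]
        children = []
        while len(children) < pop_size - len(parents):
            a, b = random.sample(parents, 2)
            cut = random.randrange(1, n_bits)            # one-point crossover
            child = a[:cut] + b[cut:]
            child = [1 - g if random.random() < mutation_rate else g for g in child]
            children.append(child)
        pop = parents + children
    return fitness(max(pop, key=fitness)), mutation_rate

print(evolve())
```

Keeping the rate fixed makes the associated Markov chain homogeneous; letting it change each generation, as above, is what makes the chain non-homogeneous.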
Abstract:
This work aims at the implementation and adaptation of a computational model for the study of the Fischer-Tropsch reaction in a slurry bed reactor fed with synthesis gas (CO+H2) for the selective production of hydrocarbons (CnHm), with emphasis on evaluating the influence of the operating conditions on the distribution of products formed during the reaction. The present model takes into account rigorous phase equilibrium effects in a reactive flash drum, a detailed kinetic model able to predict the formation of each chemical species of the reaction system, as well as control loops for the process variables pressure and slurry-phase level. The resulting system of Differential Algebraic Equations was solved using the computational code DASSL (Petzold, 1982). The consistent initialization of the problem was based on the phase equilibrium formed by the components present in the reactor. In addition, the index of the system was reduced to 1 by the introduction of control laws that govern the outflow of the reactor products. The results were compared qualitatively with experimental data collected in the Fischer-Tropsch Synthesis plant installed at Laboratório de Processamento de Gás - CTGÁS-ER-Natal/RN.
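As an illustration of the semi-explicit index-1 structure mentioned above (differential equations coupled to algebraic constraints), the toy sketch below solves a small DAE by resolving the algebraic variable inside the right-hand side; the equations are invented and bear no relation to the actual reactor model solved with DASSL:

```python
# Toy semi-explicit index-1 DAE, y' = f(y, z), 0 = g(y, z): the algebraic
# variable z is solved from g at each evaluation of the ODE right-hand side.
# DASSL handles the same structure (at much larger scale) in implicit form.
import numpy as np
from scipy.integrate import solve_ivp
from scipy.optimize import brentq

def algebraic(y):
    # 0 = g(y, z) = z**3 + z - y  ->  solvable for z because dg/dz != 0 (index 1)
    return brentq(lambda z: z**3 + z - y, -10.0, 10.0)

def rhs(t, state):
    y = state[0]
    z = algebraic(y)          # consistent algebraic variable at this instant
    return [-0.5 * y + z]     # y' = f(y, z)

sol = solve_ivp(rhs, (0.0, 10.0), [1.0], method="Radau", rtol=1e-8)
print(sol.y[0, -1], algebraic(sol.y[0, -1]))   # state and consistent z at t = 10
```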
Abstract:
The objective of this work was the development and improvement of mathematical models, based on mass and heat balances, representing the transient drying process of fruit pulp in a spouted bed dryer with intermittent feeding. The mass and energy balances for drying, represented by a system of differential equations, were implemented in Fortran and adapted to the condition of intermittent feeding and mass accumulation. The DASSL routine (Differential Algebraic System Solver) was used to solve the system of differential equations, and a heuristic optimization algorithm, the Particle Swarm algorithm, was used for parameter estimation. From the experimental drying data, the differential models were used to determine the amount of water and the temperature of the drying air at the outlet of the spouted bed, as well as the accumulated mass of powder in the dryer. The models were validated with experimental drying data whose operating conditions (air temperature, flow rate and intermittency time) varied within the limits studied. A review of the predicted results showed that these models represent the experimental data for the kinetics of powder production and accumulation and for the humidity and air temperature at the dryer outlet.
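A minimal Particle Swarm Optimization sketch of the kind of parameter estimation described above is shown below; the two-parameter drying curve and the synthetic data are hypothetical stand-ins for the dryer balances and measurements:

```python
# Particle Swarm Optimization minimizing the squared error between a model and
# measured data. The model and data are illustrative, not the dryer balances.
import numpy as np

def model(params, t):                       # hypothetical 2-parameter drying curve
    k, x_eq = params
    return x_eq + (1.0 - x_eq) * np.exp(-k * t)

def pso(objective, bounds, n_particles=30, iters=200, w=0.7, c1=1.5, c2=1.5):
    lo, hi = bounds
    pos = np.random.uniform(lo, hi, (n_particles, len(lo)))
    vel = np.zeros_like(pos)
    pbest, pbest_val = pos.copy(), np.array([objective(p) for p in pos])
    gbest = pbest[pbest_val.argmin()].copy()
    for _ in range(iters):
        r1, r2 = np.random.rand(*pos.shape), np.random.rand(*pos.shape)
        vel = w * vel + c1 * r1 * (pbest - pos) + c2 * r2 * (gbest - pos)
        pos = np.clip(pos + vel, lo, hi)
        vals = np.array([objective(p) for p in pos])
        improved = vals < pbest_val
        pbest[improved], pbest_val[improved] = pos[improved], vals[improved]
        gbest = pbest[pbest_val.argmin()].copy()
    return gbest

t_data = np.linspace(0, 5, 20)
x_data = model([0.8, 0.1], t_data) + 0.01 * np.random.randn(t_data.size)  # synthetic data
sse = lambda p: np.sum((model(p, t_data) - x_data) ** 2)
print(pso(sse, bounds=(np.array([0.01, 0.0]), np.array([5.0, 0.5]))))
```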
Abstract:
This research investigates the sense effects arising from the use of argumentative linguistic resources in a corpus of legal documents (Initial Petitions) that gave rise to actions originating in the Civil Special Court of the District of Currais Novos-RN. For this purpose, a relation was established between Law and Linguistics, mediated by the approach of Argumentative Semantics, emphasizing, in a special way, the use of argumentative operators, which, inscribed in the language itself, in its grammar, guide the orientation of the discourse, and the use of modalizers, important mechanisms in the construction of the sense of the text and in signalling the way in which what is said is said. We began the investigation of this genre by choosing as the object of study the "section of the facts", the part of the Initial Petition in which the narration of the events that gave rise to the filing of the Action is set out. In view of the object of study and the aim to be reached, we appealed, methodologically, to the notion of Rhetoric, from classical antiquity to the emergence of the New Rhetoric present in Perelman and Olbrechts-Tyteca (2005), which is nowadays inserted in the studies of Pragmatics connected to the central theses of Ducrot's thinking (1977, 1980, 1987). This framework allowed us a better understanding of the production of legal discourse by operators of the Law, as well as a broader analysis of the sense effects arising from the use of argumentative linguistic marks in legal discourse. The data showed that such marks are indispensable elements in the construction of the textual web, particularly within the scope of legal argumentation, since they direct the discourse towards certain conclusions. However, we observed that in the texts produced by the lawyers the use of those linguistic resources does not always take place in an appropriate way. The texts analyzed also showed that it is possible to unveil, through the linguistic resources, the argumentative strategies employed by the authors to convince the magistrate, making evident that language is more than a system of signs and that it makes it possible to see beyond the limits of words and statements. Finally, we verified that the categories analyzed, when used appropriately, are elements that engender effective argumentative maneuvers in the legal text, being fundamental pieces that give argumentative strength to the text and make the discourse move forward, not only legal discourse but discourse produced in any domain of knowledge.
Abstract:
Stellar differential rotation is an important key to understanding hydromagnetic stellar dynamos, instabilities, and transport processes in stellar interiors, as well as to a better treatment of tides in close binary and star-planet systems. Space-borne high-precision photometry with MOST, CoRoT, and Kepler has provided large and homogeneous datasets. This allows, for the first time, the study of differential rotation in statistically robust samples covering almost all stages of stellar evolution. In this sense, we introduce a method to measure a lower limit to the amplitude of surface differential rotation from high-precision, evenly sampled photometric time series such as those obtained by space-borne telescopes. It is designed for application to main-sequence late-type stars whose optical flux modulation is dominated by starspots. An autocorrelation of the time series is used to select stars that allow an accurate determination of spot rotation periods. A simple two-spot model is applied together with a Bayesian Information Criterion to preliminarily select intervals of the time series showing evidence of differential rotation with starspots of almost constant area. Finally, the significance of the differential rotation detection and a measurement of its amplitude and uncertainty are obtained by an a posteriori Bayesian analysis based on a Monte Carlo Markov Chain (hereafter MCMC) approach. We apply our method to the Sun and eight other stars for which previous spot modelling has been performed, in order to compare our results with previous ones. The selected stars are of spectral type F, G and K. Among the main results of this work, we find that autocorrelation is a simple method for selecting stars with a coherent rotational signal, which is a prerequisite for a successful measurement of differential rotation through spot modelling. For a proper MCMC analysis, it is necessary to take into account the strong correlations among the different parameters that exist in spot modelling. For the planet-hosting star Kepler-30, we derive a lower limit to the relative amplitude of the differential rotation. We confirm that the Sun as a star in the optical passband is not suitable for a measurement of the differential rotation owing to the rapid evolution of its photospheric active regions. In general, our method performs well in comparison with more sophisticated procedures used until now in the study of stellar differential rotation.
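The first step of the method, estimating a spot rotation period from the autocorrelation of an evenly sampled light curve, can be sketched as follows; the synthetic light curve and the peak-picking rule are illustrative simplifications, and the full method additionally relies on the two-spot model, the Bayesian Information Criterion and the MCMC analysis:

```python
# Estimate a spot rotation period as the lag of the first autocorrelation peak
# after the lag-zero peak. The synthetic light curve below is illustrative.
import numpy as np

def autocorrelation(flux):
    x = flux - flux.mean()
    acf = np.correlate(x, x, mode="full")[x.size - 1:]   # non-negative lags
    return acf / acf[0]

def rotation_period(time, flux):
    acf = autocorrelation(flux)
    dt = time[1] - time[0]                   # even sampling assumed
    below = np.where(acf < 0)[0]             # first zero crossing skips the lag-zero peak
    if below.size == 0:
        return None
    start = below[0]
    return (start + np.argmax(acf[start:])) * dt

# Synthetic spotted-star light curve: 25-day rotation, slow spot evolution, noise.
t = np.arange(0.0, 200.0, 0.02)              # days, 0.02-day sampling
flux = 1 - 0.01 * (1 + 0.3 * np.sin(2 * np.pi * t / 300.0)) * np.cos(2 * np.pi * t / 25.0)
flux = flux + 0.0005 * np.random.randn(t.size)
print(rotation_period(t, flux))              # expected to be close to 25 days
```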
Abstract:
The present study analyzes the relation between work accidents and human values. It was developed with a sample of 156 operators of a factory, through the application of structured questionnaires. The data were submitted to quantitative analyses (for example, frequency distributions, chi-square and t tests). It was verified that 27 of the employees who filled out the questionnaires had suffered work accidents. The results show that there are no significant differences between the values of people who suffered work accidents and those who did not. The employees presented a hierarchy of values different from that found in other Brazilian studies. It was observed that work accidents vary across organizational sectors. We therefore conclude that the occurrence of work accidents is not associated with values, but is probably associated with working conditions.
Abstract:
The importance of identifying the consequences of working hours for people in society has been well recognized within Organizational and Work Psychology. From this point of view, the present research had the objective of analysing the effects of work regimes on the mental health of petroleum operators of Petrobrás. The sample totaled 144 subjects, corresponding to 27% of the work population. The mental health of the participants was evaluated using the following instruments of measurement: the QSG-12, the Self-esteem Scale, the Scale of Positive and Negative Affections and the Scale of Valuable Attributes of the IMST, each representing an empirical factor used to indicate and measure the five dimensions of mental health. The subjects' perceptions of their work regime and of their other working conditions were evaluated using the scales of descriptive attributes of the IMST, by applying a semi-structured questionnaire and by means of interviews. A socio-demographic form was used to collect information related to the biographical and socio-occupational profile of the worker sample. The answers to the questionnaire were entered into an SPSS (Statistical Package for the Social Sciences) database for statistical analysis, and the interviews were analysed based on the Content Analysis technique recommended by Bardin (1995). The main results revealed that one third of the worker sample were tense; however, the mental health of the majority was preserved. Cluster Analysis applied to the group of seven factors which measured the five dimensions of mental health identified four profiles of psychological well-being shared among members of the sample. It was observed that people working in the Continuous Shift Alternation system (TIR) and in the Pre-advising system tended to present balanced and satisfactory profiles, while those who worked in the Administrative Field tended to present anxious and oscillating profiles, and were thus more affected psychologically. These were also the ones that perceived the most negative aspects of their working conditions (reduced chances of self-improvement, physically stressful work, and financial resources below expectations for meeting family and personal necessities). In agreement with the ecological model formulated by Warr (1987), the present study concluded that the positive and negative effects on psychological well-being tended to occur as a consequence of the perceptions the petroleum operators developed of their working conditions.
Abstract:
In this paper, the technique of differential pulse voltammetry (DPV) was studied for monitoring the concentration of oxalic acid (OA) during its electrochemical oxidation (EO) in acidic medium using a platinum anode supported on titanium (Ti/Pt). The DPV procedure was standardized and optimized using a glassy carbon electrode modified with cysteine. The modification with cysteine was carried out electrochemically, forming a polymeric film on the surface of the glassy carbon electrode. The formation of the polymer film was confirmed by scanning electron microscopy and atomic force microscopy analyses, confirming the modification of the electrode. The electrochemical degradation was carried out at different current densities (10, 20, 30 and 40 mA cm-2) with the Ti/Pt electrode, and the degradation of oxalic acid was monitored using the KMnO4 titration method. The DPV analyses showed the same oxalic acid removal behavior as the titration. Compared with the classical titration method, DPV showed good agreement and reliable detection limits, confirming the applicability of the electroanalytical technique for monitoring the degradation of oxalic acid.
Abstract:
In this work, an electrochemical study was carried out using cyclic voltammetry and differential pulse voltammetry for isoniazid (INH), ethambutol (EMB), rifampicin (RIF) and pyrazinamide (PZA), using a boron-doped diamond (BDD) electrode as the working electrode. The applicability of differential pulse voltammetry to the quantification of these active compounds used in the treatment of tuberculosis was also verified and subsequently applied to samples of a pharmaceutical formulation. Among the four active compounds studied, isoniazid showed the best results for detection and quantification using differential pulse voltammetry. At pH 4 and pH 8, the calibration curves for INH showed good linearity, with quantification limits of 6.15 µmol L-1 (0.844 ppm) and 4.08 µmol L-1 (0.560 ppm), respectively. The proposed method can be used to determine the drug isoniazid, since recovery values of approximately 100% were obtained.
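For reference, a quantification limit can be estimated from a linear calibration curve as LOQ = 10 s / b, where s is the residual standard deviation of the regression and b is the slope (with LOD = 3.3 s / b defined analogously). The sketch below applies this to made-up calibration points, not the INH data of the study:

```python
# Estimate LOQ and LOD from a linear calibration curve (LOQ = 10*s/b, LOD = 3.3*s/b,
# with s the residual standard deviation and b the slope). Values are hypothetical.
import numpy as np

conc = np.array([5.0, 10.0, 20.0, 40.0, 80.0])        # concentration, umol/L (hypothetical)
current = np.array([0.52, 1.01, 2.05, 3.98, 8.10])    # peak current, uA (hypothetical)

b, a = np.polyfit(conc, current, 1)                    # slope b and intercept a
residuals = current - (a + b * conc)
s = residuals.std(ddof=2)                              # residual standard deviation (n - 2)
loq = 10 * s / b                                       # quantification limit
lod = 3.3 * s / b                                      # detection limit
print(f"slope={b:.4f} uA L/umol, LOQ={loq:.2f} umol/L, LOD={lod:.2f} umol/L")
```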