818 results for fuzzy rule base models
Abstract:
Porphyrins have been the center of numerous investigations in different areas of chemistry, geochemistry, and the life sciences. In nature, the conformation of the porphyrin macrocycle varies depending on the function of its apoenzyme. It is believed that the conformation of the porphyrin ring is necessary for the enzyme to achieve its function and modulate its reactivity. It is therefore important to understand how the conformation of the porphyrin ring influences its properties. In synthetic porphyrins, particular conformations and ring deformations can be achieved by peripheral substitution, metallation, core substitution, and core protonation, among other alterations of the macrocycle. These macrocyclic distortions affect the ring current, the ability of the pyrroles to hydrogen-bond intramolecularly, and the relative basicity of each of the porphyrins. To understand these effects, different theoretical models are used. The ground-state structure of each of 19 free-base porphyrins is determined using molecular mechanics (MM+) and semiempirical methods (PM3). The energetics of deformation of the macrocyclic core is calculated by carrying out single-point energy calculations for the conformation achieved by each synthetic compound. Enthalpies of solution and enthalpies of protonation of 10 porphyrins with varying degrees of macrocyclic deformation and varying electron-withdrawing groups in the periphery are determined using solution calorimetry. Using Hess's law, the relative basicity of each of the different free-base porphyrins is calculated. NMR results are described, including the determination of free energies of activation of ring tautomerization and hydrogen bonding for several compounds. It was found that, in the absence of electronic effects, the greater the macrocyclic deformation, the greater the basicity of the porphyrin. This basicity is attenuated by the presence of electron-withdrawing groups and by the ability of the macrocycle to hydrogen-bond intramolecularly.
Abstract:
The principal effluent of the oil industry is produced water, which is commonly associated with the produced oil. It is generated in pronounced volumes and can affect the environment and society if its discharge is inappropriate; careful management is therefore indispensable. The traditional treatment of produced water usually includes two techniques, flocculation and flotation. In flocculation processes, the traditional flocculant agents are poorly specified in technical information tables and are still expensive. The flotation process, in turn, is the step in which the particles suspended in the effluent can be separated. Dissolved air flotation (DAF) is a technique that has been consolidating itself economically and environmentally, presenting great reliability when compared with other processes, and is widely used in various fields of water and wastewater treatment around the globe. In this regard, this study aimed to evaluate the potential of an alternative natural flocculant agent based on Moringa oleifera to reduce the total oil and grease (TOG) content in produced water from the oil industry by the flocculation/DAF method. The natural flocculant agent was evaluated for its efficacy, as well as for its efficiency compared with two commercial flocculant agents normally used by the petroleum industry. The experiments were conducted following an experimental design, and the overall efficiencies of all flocculants were analyzed statistically using STATISTICA software, version 10.0. Contour surfaces were obtained from the experimental design and interpreted in terms of the response variable, TOG (total oil and grease) removal efficiency. The design also yielded mathematical models for calculating the response variable under the studied conditions.
Commercial flocculants showed similar behavior, with an average overall oil-removal efficiency of 90%; the economic analysis is therefore the decisive factor in choosing one of these flocculant agents for the process. The alternative natural flocculant agent based on Moringa oleifera showed lower separation efficiency than the commercial ones (70% on average); on the other hand, it causes less environmental impact and is less expensive.
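The contour surfaces and predictive models mentioned above are typically produced by fitting a second-order response-surface model to the design points. As a hedged illustration only (the thesis used STATISTICA; the factor levels and efficiency values below are invented, not the thesis data), such a fit could be sketched with ordinary least squares:

```python
import numpy as np

# Hypothetical sketch: fit a full quadratic response-surface model
#   efficiency = b0 + b1*x1 + b2*x2 + b12*x1*x2 + b11*x1^2 + b22*x2^2
# to TOG-removal efficiencies from a two-factor central-composite-style
# design. Data are illustrative, not from the thesis.

def fit_response_surface(x1, x2, y):
    """Least-squares fit of a full quadratic model in two coded factors."""
    X = np.column_stack([np.ones_like(x1), x1, x2, x1 * x2, x1**2, x2**2])
    coef, *_ = np.linalg.lstsq(X, y, rcond=None)
    return coef

def predict(coef, x1, x2):
    """Evaluate the fitted quadratic model at coded factor levels."""
    X = np.column_stack([np.ones_like(x1), x1, x2, x1 * x2, x1**2, x2**2])
    return X @ coef

# Coded factor levels: 4 factorial points, 4 axial points, 2 center points
x1 = np.array([-1.0, -1.0, 1.0, 1.0, -1.414, 1.414, 0.0, 0.0, 0.0, 0.0])
x2 = np.array([-1.0, 1.0, -1.0, 1.0, 0.0, 0.0, -1.414, 1.414, 0.0, 0.0])
y = np.array([62.0, 70.0, 68.0, 90.0, 75.0, 88.0, 72.0, 86.0, 85.0, 84.0])

coef = fit_response_surface(x1, x2, y)  # six model coefficients
```

A contour surface is then just `predict` evaluated on a grid of the two coded factors.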
Abstract:
The great interest in nonlinear system identification is mainly due to the fact that a large number of real systems are complex and need to have their nonlinearities considered so that their models can be successfully used in applications of control, prediction, inference, among others. This work evaluates the application of Fuzzy Wavelet Neural Networks (FWNN) to identify nonlinear dynamical systems subjected to noise and outliers. Generally, these elements cause negative effects on the identification procedure, resulting in erroneous interpretations regarding the dynamical behavior of the system. The FWNN combines in a single structure the ability of fuzzy logic to deal with uncertainties, the multiresolution characteristics of wavelet theory, and the learning and generalization abilities of artificial neural networks. Usually, the learning procedure of these neural networks is carried out by a gradient-based method, which uses the mean squared error as its cost function. This work proposes the replacement of this traditional function by an Information Theoretic Learning similarity measure called correntropy. With the use of this similarity measure, higher-order statistics can be considered during the FWNN training process. For this reason, this measure is more suitable for non-Gaussian error distributions and makes the training less sensitive to the presence of outliers. In order to evaluate this replacement, FWNN models are obtained in two identification case studies: a real nonlinear system, consisting of a multisection tank, and a simulated system based on a model of the human knee joint. The results demonstrate that the application of correntropy as the cost function of the error backpropagation algorithm makes the identification procedure using FWNN models more robust to outliers. However, this is only achieved if the Gaussian kernel width of correntropy is properly adjusted.
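As a rough sketch of the similarity measure involved (not the authors' FWNN implementation; the error values below are invented), the empirical correntropy of an error signal with a Gaussian kernel can be written as:

```python
import numpy as np

# Hedged sketch: empirical correntropy of an error signal with a Gaussian
# kernel. Training maximizes correntropy rather than minimizing MSE; the
# kernel saturates for large errors, so a gross outlier changes the cost
# by at most 1/N, while it dominates the mean squared error. The kernel
# width `sigma` must be tuned, as the abstract notes.

def gaussian_kernel(e, sigma):
    """Gaussian kernel kappa_sigma(e), normalized to integrate to 1."""
    return np.exp(-e**2 / (2.0 * sigma**2)) / (np.sqrt(2.0 * np.pi) * sigma)

def correntropy(errors, sigma=1.0):
    """Empirical correntropy V(e) = (1/N) * sum_i kappa_sigma(e_i)."""
    errors = np.asarray(errors, dtype=float)
    return gaussian_kernel(errors, sigma).mean()

errors_clean = np.array([0.1, -0.2, 0.05, 0.15])
errors_outlier = np.array([0.1, -0.2, 0.05, 10.0])  # one gross outlier

v_clean = correntropy(errors_clean, sigma=1.0)
v_out = correntropy(errors_outlier, sigma=1.0)      # drops only modestly
mse_clean = (errors_clean**2).mean()
mse_out = (errors_outlier**2).mean()                # explodes
```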
Abstract:
The main aim of this investigation is to propose the notion of uniform and strong primeness in a fuzzy environment. First, the concepts of fuzzy strongly prime and fuzzy uniformly strongly prime ideals are proposed and investigated. As an additional tool, the concept of t/m systems for the fuzzy environment gives an alternative way to deal with primeness in the fuzzy setting. Second, a fuzzy version of the correspondence theorem and the radical of a fuzzy ideal are proposed. Finally, a new concept of prime ideal for quantales is proposed, which enables us to deal with primeness in a noncommutative setting.
Abstract:
Parent-mediated early intervention programs depend on the willingness and ability of parents to complete prescribed activities with their children. In other contexts, internal factors, such as stages of change, and external factors, such as barriers to treatment, have been shown to correlate with adherence to services. This researcher modified the Stages of Change Questionnaire as well as the Barriers to Treatment Participation Scale (BTPS) for use with this population. Despite initial interest, twenty-three parent participants were referred to the researcher over the course of three years, with only five parents taking part in the study. A population base ten times that of the current sample would be required to recruit enough participants (fifty-one) to provide sufficient power. This feasibility study discusses the results for the five parent participants. Findings suggest that the modified Stages of Change Questionnaire may not be sensitive enough for use with the current sample, while the modified BTPS may yield useful information for service providers.
Abstract:
Recently, several non-Hermitian models have been assessed as physically consistent, both in quantum mechanics and in field theory. The class of pseudo-Hermitian models, in fact, lends itself to the description of physical systems since, through a suitable metric operator, a Hermitian and unitary structure can be restored. PT-symmetric systems, in turn, are a category particularly studied in the literature. The reported examples seem to suggest that the so-called non-unitary conformal theories also belong to the category of PT-symmetric models, and may therefore be suited to the description of physical phenomena. In particular, an attempt is made here to construct certain Ginzburg-Landau Lagrangians for some non-unitary minimal models, on the basis of the existing identifications for the unitary minimal models. Finally, it is suggested that the domain of the well-known c-theorem be extended to the class of PT-symmetric field theories, and some lines for a possible proof of the conjectured c_{eff} theorem are proposed.
Abstract:
This work presents discussions on the teaching of Chemical Bonds in high school and some implications of this approach for students' learning of chemistry. In general, understanding how chemical species combine to form substances and compounds is a key point for understanding the properties of substances and their structure. In this sense, chemical bonds represent an extremely important topic, and their knowledge is essential for a better understanding of the changes occurring in our world. Despite these findings, it is observed that the way in which this concept is discussed in chemistry classes has contributed, paradoxically, to the emergence of several alternative conceptions, hindering students' understanding of the subject. It is believed that one of the explanations for these observations is the exclusive use of the "octet rule" as an explanatory model for Chemical Bonds. Over time, the use of such a model eventually replaces the chemical principles that gave rise to it, transforming knowledge into a series of rituals that are uninteresting and even confusing for students. Based on these findings, a reformulation of the way this content is approached in the classroom is deemed necessary, taking into account especially the fact that explanations of the formation of substances should be based on the concept of energy, which is fundamental to understanding how atoms combine. Thus, the main research question described here is the following: is it possible to develop an explanatory model for Chemical Bonds in high school based on the concept of energy and without the need to use the "octet rule"? Based on the concepts and methodologies of modeling activity, a teaching model was developed through Teaching Units designed to give high school teachers support to address chemical bonds through the concept of energy.
Through this work, it is intended to make the process of teaching and learning the content of Chemical Bonds more meaningful to students, by developing models that contribute to the learning of this and, consequently, other basic fundamentals of chemistry.
Abstract:
This research analyzes the stressed mid front vowels [ε] and [e] and mid back vowels [ɔ] and [o] in nominal forms and in verbal forms of the 1st person singular and the 3rd person singular and plural of the present tense, specifically the umlaut process of the mid vowels /e/ and /o/, which change to [ε] and [ɔ] in stressed position. The general objective of this research is to describe and quantify the occurrence of umlaut and subsequently analyze in which words there is regularity or not. As specific objectives we have: i) to compile and label an oral, spontaneous, synchronic and regional corpus from radio programs produced in the city of Ituiutaba, Minas Gerais; ii) to describe the characteristics of the compiled corpus; iii) to investigate the alternating timbre of mid vowels in stressed position; iv) to identify instances of nominal and verbal umlaut of the mid vowels in stressed position; v) to describe the identified cases of nominal and verbal umlaut; vi) to analyze the probable causes of the variation of the mid vowels. To perform the proposed analysis, we have adopted as a theoretical-methodological basis the multi-representational models Phonology of Use (BYBEE, 2001) and Exemplar Theory (PIERREHUMBERT, 2001), combined with the precepts of Corpus Linguistics (BEBER SARDINHA, 2004). The corpus consisted of 16 radio programs – eight political and eight religious – from the city of Ituiutaba-MG, with recordings of about 20 to 40 minutes. We note, by means of the results generated by the WordSmith Tools® software, version 6.0 (SCOTT, 2012), that the analyzed forms show little variation, which indicates that umlaut is a process already lexicalized for the participants of the radio programs analyzed. We conclude that the results converge with the proposal of the Phonology of Use (BYBEE, 2001; PHILLIPS, 1984) that less frequent words that lack a phonetic environment conducive to change are changed first.
Abstract:
In this work, mathematical resolutions were developed taking maximum permissible intensity values as parameters for the analysis of electric and magnetic field interference, and two virtual computing systems were produced supporting the CDMA and WCDMA technology families. For the first family, computational resources were developed to solve electric and magnetic field calculations and power densities at radio base stations using CDMA technology in the 800 MHz band, taking into account the permissible values referenced by the International Commission on Non-Ionizing Radiation Protection. The first family is divided into two calculation segments carried out in virtual operation. The first segment computes the interference field radiated by the base station from input information such as radio channel power, antenna gain, number of radio channels, operating frequency, cable losses, directional attenuation, minimum distance, and reflections. This computing system makes it possible, quickly and without deploying measurement instruments, to obtain the following calculated values: effective radiated power; sector power density; electric field in the sector; magnetic field in the sector; magnetic flux density; and the point of maximum permissible exposure for electric field and power density. The results are shown in charts for clarity of viewing the power density in the sector, as well as the definition of the coverage area. The computing module also includes specification folders for the antennas, cables and towers used in cellular telephony, from the following manufacturers: RFS World, Andrew, Kathrein and BRASILSAT. Several Internet links are provided to supplement the specifications of cables, antennas, etc. The second segment of the first family works with more variables, seeking to perform calculations quickly and safely, assisting in obtaining results for the radio signal loss produced by the base station (ERB).
This module displays screens representing propagation systems denominated "A" and "B". With propagation "A", radio signal attenuation calculations are obtained for urban, dense urban, suburban, and open rural area models. The reflection calculations include the reflection coefficients, the standing wave ratio, the return loss, the reflected power ratio, and the signal loss due to impedance mismatch. With propagation "B", the following are obtained: radio signal losses for line-of-sight and non-line-of-sight surveys, the effective area, the power density, the received power, the coverage radius, the conversion levels and the conversion gain of radiant systems. The second family of the virtual computing system consists of 7 modules, of which 5 are geared towards WCDMA design and 2 towards the calculation of telephone traffic serving CDMA and WCDMA. It includes a portfolio of the radiant systems used on site. In virtual operation, module 1 computes: frequency reuse distance, channel capacity with and without noise, Doppler frequency, modulation rate and channel efficiency. Module 2 computes the cell area, thermal noise, noise power (dB), noise figure, signal-to-noise ratio, and bit power (dBm). Module 3 calculates: breakpoint, processing gain (dB), path loss from the BTS, noise power (W), chip period and frequency reuse factor. Module 4 scales effective radiated power, sectorization gain, voice activity and load effect. Module 5 performs the calculation of processing gain (Hz/bps), bit time, and bit energy (Ws). Module 6 deals with telephone traffic, part 1, and scales: traffic volume, occupancy intensity, average occupancy time, traffic intensity, completed calls, and congestion. Module 7 deals with telephone traffic, part 2, and allows calculating completed and uncompleted calls in the busy hour (HMM).
Field tests of mobile network performance were performed for the calculation of data relating to: CINP, CPI, RSRP, RSRQ, EARFCN, Drop Call, Block Call, Pilot, Data BLER, RSCP, Short Call, Long Call and Data Call; ECIO for Short Call and Long Call; and Data Call Throughput. Surveys of the electric and magnetic fields at an ERB were also conducted, seeking to observe the degree of exposure to non-ionizing radiation to which the general public and occupational workers are exposed. The results were compared with the permissible values for health endorsed by the ICNIRP and CENELEC.
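The first segment's field and power-density figures follow standard far-field relations; a minimal sketch (assuming free-space propagation — this is not the thesis software, and the numeric inputs are invented) might look like:

```python
import math

# Illustrative far-field formulas behind the first segment's calculations:
#   power density  S = EIRP / (4*pi*d^2)
#   electric field E = sqrt(30 * EIRP) / d   (free space, so S = E^2 / (120*pi))

def eirp_w(tx_power_w, antenna_gain_dbi, cable_loss_db):
    """Equivalent isotropically radiated power in watts."""
    net_gain_db = antenna_gain_dbi - cable_loss_db
    return tx_power_w * 10 ** (net_gain_db / 10.0)

def power_density_w_m2(eirp, distance_m):
    """Far-field power density at a given distance (W/m^2)."""
    return eirp / (4.0 * math.pi * distance_m**2)

def electric_field_v_m(eirp, distance_m):
    """Free-space electric field strength at a given distance (V/m)."""
    return math.sqrt(30.0 * eirp) / distance_m

# Example inputs (hypothetical): 20 W channel, 17 dBi antenna, 3 dB cable loss
eirp = eirp_w(20.0, 17.0, 3.0)
S = power_density_w_m2(eirp, 50.0)   # at 50 m from the antenna
E = electric_field_v_m(eirp, 50.0)
```

The computed S and E would then be compared against the ICNIRP reference levels for the operating frequency.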
Abstract:
The full-scale base-isolated structure studied in this dissertation is the only base-isolated building on the South Island of New Zealand. It sustained hundreds of earthquake ground motions from September 2010 well into 2012. Several large earthquake responses were recorded in December 2011 by NEES@UCLA and by the GeoNet recording station near Christchurch Women's Hospital. The primary focus of this dissertation is to advance the state of the art of methods to evaluate the performance of seismically isolated structures and the effects of soil-structure interaction, by developing new data-processing methodologies to overcome current limitations and by implementing advanced numerical modeling in OpenSees for direct analysis of soil-structure interaction.
This dissertation presents a novel method for recovering force-displacement relations within the isolators of building structures with unknown nonlinearities from sparse seismic-response measurements of floor accelerations. The method requires only direct matrix calculations (factorizations and multiplications); no iterative trial-and-error methods are required. The method requires a mass matrix, or at least an estimate of the floor masses. A stiffness matrix may be used, but is not necessary. Essentially, the method operates on a matrix of incomplete measurements of floor accelerations. In the special case of complete floor measurements of systems with linear dynamics, real modes, and equal floor masses, the principal components of this matrix are the modal responses. In the more general case of partial measurements and nonlinear dynamics, the method extracts a number of linearly-dependent components from Hankel matrices of measured horizontal response accelerations, assembles these components row-wise and extracts principal components from the singular value decomposition of this large matrix of linearly-dependent components. These principal components are then interpolated between floors in a way that minimizes the curvature energy of the interpolation. This interpolation step can make use of a reduced-order stiffness matrix, a backward difference matrix or a central difference matrix. The measured and interpolated floor acceleration components at all floors are then assembled and multiplied by a mass matrix. The recovered in-service force-displacement relations are then incorporated into the OpenSees soil structure interaction model.
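The Hankel-matrix/SVD step described above can be sketched as follows. This is a simplified, hypothetical fragment — a single synthetic "acceleration" channel rather than multi-floor measurements — and it shows only the Hankel construction and principal-component extraction, not the interpolation or force recovery:

```python
import numpy as np

# Hedged sketch: build a Hankel matrix from a measured response record and
# extract its dominant principal components via the SVD. A record composed
# of a few modal contributions yields a low-rank Hankel matrix, which is
# what lets a small number of principal components represent the response.

def hankel_matrix(signal, num_rows):
    """Hankel matrix whose i-th row is the signal delayed by i samples."""
    signal = np.asarray(signal, dtype=float)
    num_cols = signal.size - num_rows + 1
    return np.array([signal[i:i + num_cols] for i in range(num_rows)])

def principal_components(signal, num_rows, rank):
    """Singular values and the leading `rank` principal component series."""
    H = hankel_matrix(signal, num_rows)
    U, s, Vt = np.linalg.svd(H, full_matrices=False)
    return s, Vt[:rank]

# Synthetic two-mode record: its Hankel matrix has (numerical) rank 4,
# two singular-value pairs per sinusoidal component.
t = np.linspace(0.0, 10.0, 500)
record = np.sin(2 * np.pi * 1.2 * t) + 0.3 * np.sin(2 * np.pi * 3.5 * t)
s, comps = principal_components(record, num_rows=40, rank=4)
```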
Numerical simulations of soil-structure interaction involving non-uniform soil behavior are conducted following the development of the complete soil-structure interaction model of Christchurch Women's Hospital in OpenSees. In these 2D OpenSees models, the superstructure is modeled as two-dimensional frames in the short-span and long-span directions respectively. The lead rubber bearings are modeled as elastomeric bearing (Bouc-Wen) elements. The soil underlying the concrete raft foundation is modeled with linear elastic plane-strain quadrilateral elements. The non-uniformity of the soil profile is incorporated by extracting and interpolating the shear-wave velocity profile from the Canterbury Geotechnical Database. The validity of the complete two-dimensional soil-structure interaction OpenSees model for the hospital is checked by comparing the peak floor responses and force-displacement relations within the isolation system obtained from the OpenSees simulations against the recorded measurements. General explanations and implications, supported by displacement drifts, floor acceleration and displacement responses, and force-displacement relations, are described to address the effects of soil-structure interaction.
Abstract:
People go through their lives making all kinds of decisions, and some of these decisions affect their demand for transportation, for example, their choices of where to live and where to work, how and when to travel and which route to take. Transport-related choices are typically time dependent and characterized by a large number of alternatives that can be spatially correlated. This thesis deals with models that can be used to analyze and predict discrete choices in large-scale networks. The proposed models and methods are highly relevant for, but not limited to, transport applications. We model decisions as sequences of choices within the dynamic discrete choice framework, also known as parametric Markov decision processes. Such models are known to be difficult to estimate and to apply to make predictions because dynamic programming problems need to be solved in order to compute choice probabilities. In this thesis we show that it is possible to exploit the network structure and the flexibility of dynamic programming so that the dynamic discrete choice modeling approach is not only useful for modeling time-dependent choices, but also makes it easier to model large-scale static choices. The thesis consists of seven articles containing a number of models and methods for estimating, applying and testing large-scale discrete choice models. In the following we group the contributions under three themes: route choice modeling, large-scale multivariate extreme value (MEV) model estimation and nonlinear optimization algorithms. Five articles are related to route choice modeling. We propose different dynamic discrete choice models that allow paths to be correlated, based on the MEV and mixed logit models. The resulting route choice models become expensive to estimate, and we deal with this challenge by proposing innovative methods that reduce the estimation cost.
For example, we propose a decomposition method that not only opens up the possibility of mixing, but also speeds up the estimation of simple logit models, which also has implications for traffic simulation. Moreover, we compare the utility maximization and regret minimization decision rules, and we propose a misspecification test for logit-based route choice models. The second theme is related to the estimation of static discrete choice models with large choice sets. We establish that a class of MEV models can be reformulated as dynamic discrete choice models on the networks of correlation structures. These dynamic models can then be estimated quickly using dynamic programming techniques and an efficient nonlinear optimization algorithm. Finally, the third theme focuses on structured quasi-Newton techniques for estimating discrete choice models by maximum likelihood. We examine and adapt switching methods that can be easily integrated into usual optimization algorithms (line search and trust region) to accelerate the estimation process. The proposed dynamic discrete choice models and estimation methods can be used in various discrete choice applications. In the area of big data analytics, models that can deal with large choice sets and sequential choices are important. Our research can therefore be of interest in various demand analysis applications (predictive analytics) or can be integrated with optimization models (prescriptive analytics). Furthermore, our studies indicate the potential of dynamic programming techniques in this context, even for static models, which opens up a variety of future research directions.
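The dynamic-programming step at the heart of such models — computing logsum value functions, from which logit link-choice probabilities follow — can be sketched on a toy network. This is a hedged illustration in the spirit of recursive-logit route choice: the network and link utilities are invented, the network is acyclic so backward induction suffices, and estimation (the hard part the thesis addresses) is not shown:

```python
import math

# Toy acyclic network: node -> list of (next_node, deterministic link utility).
# With i.i.d. Gumbel utility shocks, the value function satisfies the logsum
# Bellman equation V(s) = log sum_a exp(u(s,a) + V(next(s,a))), V(dest) = 0.
links = {
    "A": [("B", -1.0), ("C", -1.5)],
    "B": [("D", -1.0)],
    "C": [("D", -0.5)],
    "D": [],  # destination
}

def value_function(links, dest="D"):
    """Backward induction for the logsum value function on an acyclic network."""
    V = {dest: 0.0}
    def solve(node):
        if node in V:
            return V[node]
        V[node] = math.log(sum(math.exp(u + solve(nxt))
                               for nxt, u in links[node]))
        return V[node]
    for node in links:
        solve(node)
    return V

def choice_probs(links, V, node):
    """Logit probabilities over the outgoing links at `node`."""
    expo = [math.exp(u + V[nxt]) for nxt, u in links[node]]
    z = sum(expo)
    return {nxt: e / z for (nxt, _), e in zip(links[node], expo)}

V = value_function(links)
p = choice_probs(links, V, "A")  # both routes A-B-D and A-C-D have utility -2
```

Path probabilities are the products of such link probabilities, which is what makes large choice sets tractable: the value function is computed once per destination rather than once per path.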
Abstract:
The type locality of the Cutri Formation, exposed in NW Mallorca, Spain, has previously been described by Álvaro et al. (1989) and further interpreted in Abbots' (1989) unpublished PhD thesis as a base-of-slope carbonate apron. Incorporating new field and laboratory analysis, this paper refines that interpretation. From this analysis it can be shown beyond reasonable doubt that the Cutri Formation was deposited in a carbonate base-of-slope environment on the palaeowindward side of a Mid-Jurassic Tethyan platform. Key evidence, such as laterally extensive exposures and abundant deposits of calciturbidites and debris flows amongst hemipelagic deposits, strongly supports this interpretation.
Abstract:
This keynote presentation will report some of our research work and experience on the development and applications of relevant methods, models, systems and simulation techniques in support of different types and various levels of decision making for business, management and engineering. In particular, the following topics will be covered:
- Modelling, multi-agent-based simulation and analysis of the allocation management of carbon dioxide emission permits in China (Nanfeng Liu & Shuliang Li)
- Agent-based simulation of the dynamic evolution of enterprise carbon assets (Yin Zeng & Shuliang Li)
- A framework & system for extracting and representing project knowledge contexts using topic models and dynamic knowledge maps: a big data perspective (Jin Xu, Zheng Li, Shuliang Li & Yanyan Zhang)
- Open innovation: intelligent model, social media & complex adaptive system simulation (Shuliang Li & Jim Zheng Li)
- A framework, model and software prototype for modelling and simulation of deshopping behaviour and how companies respond (Shawkat Rahman & Shuliang Li)
- Integrating multiple agents, simulation, knowledge bases and fuzzy logic for international marketing decision making (Shuliang Li & Jim Zheng Li)
- A Web-based hybrid intelligent system for combined conventional, digital, mobile, social media and mobile marketing strategy formulation (Shuliang Li & Jim Zheng Li)
- A hybrid intelligent model for Web & social media dynamics, and evolutionary and adaptive branding (Shuliang Li)
- A hybrid paradigm for modelling, simulation and analysis of brand virality in social media (Shuliang Li & Jim Zheng Li)
- Network configuration management: attack paradigms and architectures for computer network survivability (Tero Karvinen & Shuliang Li)