921 results for systems-based simulation
Abstract:
Glass ionomer cements (GICs) are widely employed in dentistry for several applications, such as luting cements for the attachment of crowns, bridges, and orthodontic brackets, as well as restorative materials. The development of new glass systems is very important in dentistry to improve mechanical properties and chemical stability. The aim of this study is the preparation of two glass systems containing niobium in their compositions for use as GICs. Glass systems based on the composition SiO2-Al2O3-Nb2O5-CaO were prepared by a chemical route at 700 °C. The XRD and DTA results confirmed that the prepared materials are glasses. The structures of the obtained glasses were compared to a commercial material using FTIR and Al-27 and Si-29 MAS-NMR. The analysis of the FTIR and MAS-NMR spectra indicated that both the developed systems and the commercial material are formed by linked SiO4 and AlO4 tetrahedra. These structures are essential to control the setting time and to obtain cements. These results encourage further applications of the experimental glasses in the formulation of GICs. (C) 2004 Elsevier B.V. All rights reserved.
Abstract:
Glass formation has been investigated in binary systems based on antimony oxide as the main glass former: (100-x)Sb2O3-xWO3 (5 < x < 65), (100-x)Sb2O3-xSbPO4 (5 < x < 80), and (100-x)Sb2O3-x[Sb(PO3)3]n (10 < x < 40). Ternary systems derived from the Sb2O3-WO3 binary glass have also been studied: Sb2O3-WO3-BaF2, Sb2O3-WO3-NaF, and Sb2O3-WO3-[Sb(PO3)3]n. The glass transition temperature ranges from 280 °C to 380 °C; it increases as the concentration of tungsten oxide or antimony phosphate increases. The refractive index is larger than 2. Tungsten-containing glasses are yellow in transmission and turn green at the largest WO3 contents. Optical transmission and the temperatures of glass transition (Tg), onset of crystallization (Tx), and maximum of crystallization (Tp) have been measured using differential scanning calorimetry (DSC). These glasses have potential photonic applications. (C) 2001 Elsevier B.V. All rights reserved.
Abstract:
Systems based on artificial neural networks achieve high computational rates due to the use of a massive number of simple processing elements and the high degree of connectivity between these elements. Neural networks with feedback connections provide a computing model capable of solving a large class of optimization problems. This paper presents a novel approach for solving dynamic programming problems using artificial neural networks. More specifically, a modified Hopfield network is developed and its internal parameters are computed using the valid-subspace technique. These parameters guarantee the convergence of the network to equilibrium points that represent solutions (not necessarily optimal) of the dynamic programming problem. Simulated examples are presented and compared with other neural networks. The results demonstrate that the proposed method gives a significant improvement.
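As a rough illustration of the Hopfield dynamics underlying this kind of approach, the sketch below implements a generic discrete energy-descent update (not the paper's valid-subspace parameterization; the function name and example weights are hypothetical):

```python
import numpy as np

def hopfield_solve(W, b, x0, max_iter=100):
    """Asynchronous discrete Hopfield update: each neuron switches to the
    state that lowers the energy E(x) = -0.5 x^T W x - b^T x.
    With symmetric W and zero diagonal, this converges to an equilibrium
    point (a local minimum of E)."""
    x = x0.copy()
    for _ in range(max_iter):
        changed = False
        for i in range(len(x)):
            h = W[i] @ x + b[i]              # local field of neuron i
            new = 1.0 if h >= 0 else 0.0     # threshold activation
            if new != x[i]:
                x[i] = new
                changed = True
        if not changed:                      # equilibrium reached
            break
    return x

# Tiny example: mutual inhibition between two neurons plus a positive bias,
# so exactly one neuron stays active at equilibrium.
W = np.array([[0.0, -2.0], [-2.0, 0.0]])
b = np.array([1.0, 1.0])
x_eq = hopfield_solve(W, b, np.array([1.0, 1.0]))
```

In the paper's setting, W and b would be derived from the dynamic programming problem so that low-energy states encode valid solutions; here they are chosen only to show the energy-descent mechanics.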
Abstract:
Cryptographic systems are considered secure; however, managing the cryptographic keys of these systems is a difficult task. Keys are usually protected by password-based authentication mechanisms, which are a weak link in conventional cryptographic systems, since passwords can be easily copied or stolen. Using a biometric approach to release the keys is an alternative to password-based mechanisms. But just like passwords, the biometric signal itself must be kept safe. One approach to this is biometric key cryptography: cryptographic systems that use biometric characteristics as keys are called biometric cryptographic systems. This article presents an implementation of the Fuzzy Vault, a biometric cryptographic system written in Java, along with its performance evaluation. The Fuzzy Vault was tested in a real application using smartcards.
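The fuzzy vault idea can be sketched as follows: the key is encoded as the coefficients of a polynomial, the polynomial is evaluated at the user's biometric feature points, and the resulting genuine points are hidden among random chaff points. The toy version below works over a small prime field and requires exact feature matches; real implementations use much larger fields and error-correcting codes, and all names here are illustrative:

```python
import random

P = 97  # toy prime field; real vaults use much larger fields

def poly_eval(coeffs, x, p=P):
    """Evaluate a polynomial (coefficients lowest degree first) mod p."""
    y = 0
    for c in reversed(coeffs):
        y = (y * x + c) % p
    return y

def lock(secret_coeffs, features, n_chaff=15, p=P):
    """Hide the genuine points (f, poly(f)) among random chaff points."""
    xs = sorted({f % p for f in features})
    vault = [(x, poly_eval(secret_coeffs, x, p)) for x in xs]
    used = set(xs)
    while len(vault) < len(xs) + n_chaff:
        cx, cy = random.randrange(p), random.randrange(p)
        if cx not in used and cy != poly_eval(secret_coeffs, cx, p):
            vault.append((cx, cy))
            used.add(cx)
    random.shuffle(vault)
    return vault

def interpolate(points, p=P):
    """Lagrange interpolation over GF(p); returns the coefficient list
    (lowest degree first) of the unique polynomial through the points."""
    k = len(points)
    coeffs = [0] * k
    for i, (xi, yi) in enumerate(points):
        basis = [1]          # basis polynomial l_i(x) in coefficient form
        denom = 1
        for j, (xj, _) in enumerate(points):
            if j == i:
                continue
            nxt = [0] * (len(basis) + 1)
            for d, bc in enumerate(basis):   # multiply basis by (x - xj)
                nxt[d] = (nxt[d] - xj * bc) % p
                nxt[d + 1] = (nxt[d + 1] + bc) % p
            basis = nxt
            denom = (denom * (xi - xj)) % p
        scale = yi * pow(denom, -1, p) % p
        for d, bc in enumerate(basis):
            coeffs[d] = (coeffs[d] + scale * bc) % p
    return coeffs

def unlock(vault, query_features, degree, p=P):
    """Recover the key if enough query features match genuine points."""
    qs = {f % p for f in query_features}
    matches = [(x, y) for (x, y) in vault if x in qs]
    if len(matches) < degree + 1:
        return None
    return interpolate(matches[:degree + 1], p)
```

A user enrolling with features [10, 23, 55, 61] can later unlock the vault with any 3 of those features (for a degree-2 polynomial), while an attacker cannot distinguish genuine points from chaff.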
Abstract:
Distributed generation, microgrid technologies, two-way communication systems, and demand response programs are topics that have been studied in recent years within the concept of smart grids. At sufficient penetration levels, Distributed Generators (DGs) can provide benefits to sub-transmission and transmission systems through so-called ancillary services. This work focuses on the ancillary service of reactive power support provided by DGs, specifically Wind Turbine Generators (WTGs), with a high level of impact on transmission systems. The main objective of this work is to propose an optimization methodology to price this service by determining the costs a DG incurs when it loses the opportunity to sell active power, i.e., by determining the Loss of Opportunity Costs (LOC). LOC occur when more reactive power is required than is available, and active power generation has to be reduced in order to increase the reactive power capacity. The optimization process considers three objectives: the active power generation costs of the DGs, the voltage stability margin of the system, and the losses in the lines of the network. Uncertainties of the WTGs are reduced by solving multi-objective optimal power flows over multiple probabilistic scenarios constructed by Monte Carlo simulations, and by modeling the time series associated with the active power generation of each WTG via fuzzy logic and Markov chains. The proposed methodology was tested on the IEEE 14-bus test system with two WTGs installed. © 2011 IEEE.
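The core of the LOC mechanism can be illustrated with a simplified generator capability model, a circular PQ limit S² ≥ P² + Q²: when the requested reactive power pushes against the apparent-power rating, the deliverable active power shrinks and the curtailed energy is the lost sales opportunity. The function below is only a hypothetical sketch; the paper's actual WTG capability curves and multi-objective pricing are considerably more involved:

```python
import math

def loss_of_opportunity_cost(s_rated_mva, p_available_mw, q_required_mvar,
                             energy_price_per_mwh, hours=1.0):
    """Estimate LOC for a generator with a circular capability limit
    S^2 >= P^2 + Q^2: supplying q_required caps active output at
    sqrt(S^2 - Q^2), and any wind power above that cap is curtailed."""
    q = min(abs(q_required_mvar), s_rated_mva)
    p_max = math.sqrt(s_rated_mva**2 - q**2)       # active-power cap
    curtailed_mw = max(0.0, p_available_mw - p_max)
    return curtailed_mw * energy_price_per_mwh * hours

# A 2 MVA WTG with 1.9 MW of wind available, asked for 1.2 MVAr at 50 $/MWh:
# the cap is sqrt(4 - 1.44) = 1.6 MW, so 0.3 MW is curtailed.
loc = loss_of_opportunity_cost(2.0, 1.9, 1.2, 50.0)
```

When the reactive request leaves headroom above the available wind power, no curtailment occurs and the LOC is zero.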
Abstract:
This paper presents a novel mathematical model for the transmission network expansion planning problem. The main idea is to consider phase-shifter (PS) transformers as a new element of transmission system expansion, together with other traditional components such as transmission lines and conventional transformers. PS devices are added in order to redistribute active power flows in the system and, consequently, to reduce the total investment cost of new transmission lines. The proposed mathematical model has the structure of a mixed-integer nonlinear programming (MINLP) problem and is based on the standard DC model. A specialized genetic algorithm is also applied to optimize the allocation of candidate components in the network. Results obtained from computational simulations carried out on the IEEE 24-bus system show an outstanding performance of the proposed methodology and model, indicating the technical viability of using these nonconventional devices during the planning process. Copyright © 2012 Celso T. Miasaki et al.
Abstract:
Purpose: To prospectively compare, in a randomized clinical trial, dentin hypersensitivity after treatment with three in-office bleaching systems based on hydrogen peroxide at different concentrations, with and without light-source activation. Methods: 88 individuals were enrolled according to inclusion and exclusion criteria and randomly divided into three treatment groups. Group 1 was treated with three 15-minute applications of 15% hydrogen peroxide with titanium dioxide (Lase Peroxide Lite), light-activated (Light Plus Whitening Lase) in five cycles of 1 minute and 30 seconds each, for a total treatment time of 45 minutes. Group 2 was treated with three 10-minute applications of 35% hydrogen peroxide (Lase Peroxide Sensy), light-activated (LPWL) with the same activation cycles as Group 1, for a total treatment time of 30 minutes. Group 3 was treated with a single 45-minute application of 35% hydrogen peroxide (Whitegold Office) without light activation. Each subject underwent one bleaching session on the anterior teeth according to the manufacturers' instructions. Dentin sensitivity was recorded on a visual analogue scale (VAS) at baseline, immediately after treatment, and at 7 and 30 days after treatment, using an evaporative stimulus (a 3-second air blast from a triple syringe) on the upper central incisors from a distance of 1 cm. A Kruskal-Wallis test followed by the Mann-Whitney test was used for statistical analysis. Results: All groups showed increased sensitivity immediately after treatment. Group 1 displayed the smallest change relative to baseline, with no significant difference (P = 0.104). At 7 and 30 days after treatment, a comparison of VAS values indicated no significant differences among the groups (P = 0.598 and 0.489, respectively).
Abstract:
This paper presents a novel time-domain approach for Structural Health Monitoring (SHM) systems based on the Electromechanical Impedance (EMI) principle and Principal Component Coefficients (PCC), also known as loadings. Unlike typical applications of EMI to SHM, which are based on computing the Frequency Response Function (FRF), in this work the procedure relies on the EMI principle but the entire analysis is conducted directly in the time domain. For this, the PCC are computed from the time responses of PZT (lead zirconate titanate) transducers bonded to the monitored structure, which act as actuator and sensor at the same time. The procedure is carried out by exciting the PZT transducers with a wide-band chirp signal and recording their time responses. The PCC are obtained in both healthy and damaged conditions and used to compute statistical indexes. Tests were carried out on an aircraft aluminum plate, and the results demonstrate the effectiveness of the proposed method, making it an excellent approach for SHM applications. Finally, the results using EMI signals in both frequency and time responses are obtained and compared. © The Society for Experimental Mechanics 2014.
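A loadings-based damage metric of this kind might look like the following sketch, which extracts principal-component loadings from repeated time responses via SVD and compares baseline and test conditions. The abstract does not specify the statistical index used, so the cosine-based index below is just one plausible choice, and all names are hypothetical:

```python
import numpy as np

def pc_loadings(signals):
    """signals: (n_trials, n_samples) matrix of transducer time responses.
    Returns the principal-component loadings (right singular vectors of
    the mean-centered data), columns ordered by explained variance."""
    X = signals - signals.mean(axis=0)
    _, _, vt = np.linalg.svd(X, full_matrices=False)
    return vt.T  # each column is one loading vector

def damage_index(loadings_ref, loadings_test, k=1):
    """Toy index: 1 - |cosine similarity| between the first k loading
    vectors of the baseline (healthy) and test conditions.
    Values near 0 indicate the structure behaves like the baseline."""
    a = loadings_ref[:, :k].ravel()
    b = loadings_test[:, :k].ravel()
    cos = abs(a @ b) / (np.linalg.norm(a) * np.linalg.norm(b))
    return 1.0 - cos
```

Repeated acquisitions under the healthy condition should produce an index near zero; damage that reshapes the dominant response pattern drives it upward.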
Abstract:
Coordenação de Aperfeiçoamento de Pessoal de Nível Superior (CAPES)
Abstract:
The present study aimed to identify Eimeria species in young and adult sheep raised under intensive and/or semi-intensive systems in a herd from Umuarama city, Paraná State, Brazil, using traditional diagnostic methods, and to correlate the level and type of infection with the different ages and rearing systems in this herd. Fecal samples were collected from the rectum of 210 sheep and subjected to laboratory analysis to differentiate the species. Furthermore, the animals were observed to determine the occurrence of clinical or subclinical forms of eimeriosis. Of the 210 collected fecal samples, 147 (70%) were positive for Eimeria oocysts, and 101 (47.86%) belonged to young animals raised under intensive and/or semi-intensive farming systems. Oocysts of 9 Eimeria species were identified in the sheep, at the following prevalence rates: E. crandallis, 50.0%; E. parva, 21.6%; E. faurei, 8.1%; E. ahsata, 8.1%; E. intricata, 5.4%; E. granulosa, 2.7%; E. ovinoidalis, 2.0%; E. ovina, 1.3%; and E. bakuensis, 0.6%. There were no differences in the most frequent Eimeria species among the different animal ages or between the farming management systems. Based on these data, E. crandallis was the most prevalent species, followed by E. parva and E. faurei, regardless of age. Higher parasitism was diagnosed in the young animals raised under a confinement regime, and the disease found in the herd was classified as subclinical. Further studies should be conducted in this herd to verify whether subclinical eimeriosis can cause damage, especially in young animals with high levels of infection.
Abstract:
Fundação de Amparo à Pesquisa do Estado de São Paulo (FAPESP)
Abstract:
The aim of this paper is to propose a classification of reverse logistics systems based on the activities used to recover value from returned products. Case studies were carried out in three Brazilian companies. The results show that Company 1 uses a ‘disposal logistics system’, its main reason for returns is ‘end of life’, and its main motivation is ‘legislation’; Company 2 uses a ‘recycling logistics system’, its main reason for returns is ‘products not sold’, and its main motivation is ‘recovery of assets and value’; finally, Company 3 uses a ‘product reprocessing logistics system’, its main reason for returns is ‘end of life’, and its main motivation is ‘social and environmental responsibility’.
Abstract:
Over the past several decades, the topic of child development in a cultural context has received a great deal of theoretical and empirical investigation. Investigators from the fields of indigenous and cultural psychology have argued that childhood is socially and historically constructed, rather than a universal process with a standard sequence of developmental stages or descriptions. As a result, many psychologists have become doubtful that any stage theory of cognitive or social-emotional development can be found to be valid for all times and places. In placing more theoretical emphasis on contextual processes, they define culture as a complex system of common symbolic action patterns (or scripts) built up through everyday human social interaction, by means of which individuals create common meanings and in terms of which they organize experience. Researchers understand culture to be organized and coherent, but not homogeneous or static, and realize that the complex dynamic system of culture constantly undergoes transformation as participants (adults and children) negotiate and re-negotiate meanings through social interaction. These negotiations and transactions give rise to unceasing heterogeneity and variability in how different individuals and groups of individuals interpret values and meanings. However, while many psychologists, both inside and outside the fields of indigenous and cultural psychology, are now willing to give up the idea of a universal path of child development and a universal story of parenting, they have not necessarily foreclosed on the possibility of discovering and describing some universal processes that underlie socialization and development-in-context. The roots of such universalities would lie in the biological aspects of child development, in the evolutionary processes of adaptation, and in the unique symbolic and problem-solving capacities of the human organism as a culture-bearing species.
For instance, according to functionalist psychological anthropologists, shared (cultural) processes surround the developing child and, in the long view, promote the survival of families and groups as they seek continuity in the face of ecological change and resource competition (e.g. Edwards & Whiting, 2004; Gallimore, Goldenberg, & Weisner, 1993; LeVine, Dixon, LeVine, Richman, Leiderman, Keefer, & Brazelton, 1994; LeVine, Miller, & West, 1988; Weisner, 1996, 2002; Whiting & Edwards, 1988; Whiting & Whiting, 1980). As LeVine and colleagues (1994) state: "A population tends to share an environment, symbol systems for encoding it, and organizations and codes of conduct for adapting to it (emphasis added). It is through the enactment of these population-specific codes of conduct in locally organized practices that human adaptation occurs. Human adaptation, in other words, is largely attributable to the operation of specific social organizations (e.g. families, communities, empires) following culturally prescribed scripts (normative models) in subsistence, reproduction, and other domains [communication and social regulation]" (p. 12). It follows, then, that in seeking to understand child development in a cultural context, psychologists need to support collaborative and interdisciplinary developmental science that crosses international borders. Such research can advance cross-cultural psychology, cultural psychology, and indigenous psychology, understood as three sub-disciplines composed of scientists who frequently communicate and debate with one another and mutually inform one another's research programs. For example, turning to parental belief systems, the particular topic of this chapter, it is clear that collaborative international studies are needed to support the goal of cross-cultural psychologists for findings that go beyond simply describing cultural differences in parental beliefs.
Comparative researchers need to shed light on whether parental beliefs are (or are not) systematically related to differences in child outcomes, and they need meta-analyses and reviews to explore between- and within-culture variations in parental beliefs, with a focus on issues of social change (Saraswathi, 2000). Likewise, collaborative research programs can foster the goals of indigenous psychology and cultural psychology and lay out valid descriptions of individual development in particular cultural contexts, along with the processes, principles, and critical concepts needed for defining, analyzing, and predicting outcomes of child development-in-context. The project described in this chapter is based on an approach that integrates elements of comparative methodology to serve the aim of describing particular scenarios of child development in unique contexts. The research team of cultural insiders and outsiders allows for a look at American belief systems based on a dialogue of multiple perspectives.
Abstract:
The Peer-to-Peer network paradigm is drawing the attention of both end users and researchers for its features. P2P networks shift from the classic client-server approach to a high level of decentralization, where there is no central control and all nodes should be able not only to request services but to provide them to other peers as well. While on one hand such a high level of decentralization may lead to interesting properties like scalability and fault tolerance, on the other hand it implies many new problems to deal with. A key feature of many P2P systems is openness, meaning that everybody is potentially able to join a network with no need for subscription or payment. The combination of openness and lack of central control makes it feasible for a user to free-ride, that is, to increase one's own benefit by using services without allocating resources to satisfy other peers' requests. One of the main goals when designing a P2P system is therefore to achieve cooperation between users. Given the nature of P2P systems, based on simple local interactions of many peers having partial knowledge of the whole system, an interesting way to achieve desired properties on a system scale might consist in obtaining them as emergent properties of the many interactions occurring at the local node level. Two methods are typically used to face the problem of cooperation in P2P networks: 1) engineering emergent properties when designing the protocol; 2) studying the system as a game and applying Game Theory techniques, especially to find Nash Equilibria in the game and to reach them, making the system stable against possible deviant behaviors. In this work we present an evolutionary framework to enforce cooperative behaviour in P2P networks that is an alternative to both the methods mentioned above.
Our approach is based on an evolutionary algorithm inspired by computational sociology and evolutionary game theory, in which each peer periodically tries to copy another peer that is performing better. The proposed algorithms, called SLAC and SLACER, draw inspiration from tag systems originating in computational sociology; the main idea behind the algorithm is that low-performance nodes copy high-performance ones. The algorithm is run locally by every node and leads to an evolution of the network both in its topology and in the nodes' strategies. Initial tests with a simple Prisoner's Dilemma application show how SLAC is able to bring the network to a state of high cooperation independently of the initial network conditions. Interesting results are obtained when studying the effect of cheating nodes on the SLAC algorithm: in some cases, selfish nodes rationally exploiting the system for their own benefit can actually improve system performance from the point of view of cooperation formation. The final step is to apply our results to more realistic scenarios. We focused on studying and improving the BitTorrent protocol. BitTorrent was chosen not only for its popularity but because it has many points in common with the SLAC and SLACER algorithms, ranging from its game-theoretical inspiration (a tit-for-tat-like mechanism) to its swarm topology. We found fairness, defined as the ratio between uploaded and downloaded data, to be a weakness of the original BitTorrent protocol, and we drew on the knowledge of cooperation formation and maintenance mechanisms derived from the development and analysis of SLAC and SLACER to improve fairness and tackle free-riding and cheating in BitTorrent. We produced an extension of BitTorrent, called BitFair, which was evaluated through simulation and has shown the ability to enforce fairness and tackle free-riding and cheating nodes.
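The copy-the-better-peer dynamic can be sketched in a few lines: nodes play a Prisoner's Dilemma with their neighbours, then each node compares its utility with a randomly chosen node and, if that node did better, copies its strategy and links. This is a highly simplified illustration that omits SLACER's probabilistic link rewiring and tag details; the network is a plain dictionary and all names are hypothetical:

```python
import random

# Prisoner's Dilemma payoff for (my_move, neighbour_move)
PD = {('C', 'C'): 3, ('C', 'D'): 0, ('D', 'C'): 5, ('D', 'D'): 1}

def play_round(strategies, neighbours):
    """Every node plays one PD round with each neighbour; returns the
    total utility earned by each node."""
    utility = {n: 0 for n in strategies}
    for n, nbrs in neighbours.items():
        for m in nbrs:
            utility[n] += PD[(strategies[n], strategies[m])]
    return utility

def slac_step(strategies, neighbours, utility, mutation=0.05):
    """SLAC-style update: each node compares itself with a random node
    and, if that node performed better, copies its strategy and links;
    a small mutation rate keeps the population exploring."""
    nodes = list(strategies)
    for n in nodes:
        other = random.choice(nodes)
        if utility[other] > utility[n]:
            strategies[n] = strategies[other]
            # adopt the better node's links (never linking to oneself)
            neighbours[n] = (set(neighbours[other]) - {n}) or {other}
        if random.random() < mutation:
            strategies[n] = random.choice(['C', 'D'])
```

Iterating `play_round` and `slac_step` lets clusters of cooperators attract copiers, which is the emergent mechanism SLAC relies on; convergence behavior depends on the payoffs, mutation rate, and initial topology.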