842 results for performance-based engineering
Abstract:
Stemming from in vitro and in vivo pre-clinical and human models, tissue-engineering-based strategies continue to demonstrate great potential for the regeneration of the pulp-dentin complex, particularly in necrotic, immature permanent teeth. Nanofibrous scaffolds, which closely resemble the native extracellular matrix, have been successfully synthesized by various techniques, including but not limited to electrospinning. A common goal in scaffold synthesis has been the notion of promoting cell guidance through the careful design and use of a collection of biochemical and physical cues capable of governing and stimulating specific events at the cellular and tissue levels. The latest advances in processing technologies allow for the fabrication of scaffolds where selected bioactive molecules can be delivered locally, thus increasing the possibilities for clinical success. Though electrospun scaffolds have not yet been tested in vivo in either human or animal pulpless models in immature permanent teeth, recent studies have highlighted their regenerative potential both from an in vitro and in vivo (i.e., subcutaneous model) standpoint. Possible applications for these bioactive scaffolds continue to evolve, with significant prospects related to the regeneration of both dentin and pulp tissue and, more recently, to root canal disinfection. Nonetheless, no single implantable scaffold can consistently guide the coordinated growth and development of the multiple tissue types involved in the functional regeneration of the pulp-dentin complex. The purpose of this review is to provide a comprehensive perspective on the latest discoveries related to the use of scaffolds and/or stem cells in regenerative endodontics. The authors focused this review on bioactive nanofibrous scaffolds, injectable scaffolds and stem cells, and pre-clinical findings using stem-cell-based strategies. These topics are discussed in detail in an attempt to provide future direction and to shed light on their potential translation to clinical settings.
Abstract:
Coordenação de Aperfeiçoamento de Pessoal de Nível Superior (CAPES)
Abstract:
Fundação de Amparo à Pesquisa do Estado de São Paulo (FAPESP)
Abstract:
Current concern with the environment drives the development of new production technologies that use alternative materials from renewable resources and change production processes, with the main objective of reducing environmental impact. One alternative for cleaner production is the use of castor oil derivatives in place of adhesives from non-renewable sources, such as those based on PVA (polyvinyl acetate), applied in the manufacture of glued laminated bamboo. Building on the versatility of bamboo laminate and castor oil, and from the perspective of sustainability, this study aims to contribute to the application of new materials and processes in the manufacturing industry by proposing the use of a castor oil-based polyurethane adhesive for glued laminated bamboo manufacturing, which can later be used in the manufacture of several products. To verify the applicability of the castor oil-based polyurethane adhesive in glued laminated bamboo manufacture, mechanical tensile and glue-line shear tests were performed on specimens of the material, and the results were compared with those of the Cascorez 2590 and Waterbond adhesives. In the tensile test, the castor oil-based polyurethane adhesive performed better than the Waterbond adhesive and slightly below the Cascorez 2590 adhesive; in the shear test, it performed slightly below both of the other adhesives.
Abstract:
The security of the two party Diffie-Hellman key exchange protocol is currently based on the discrete logarithm problem (DLP). However, it can also be built upon the elliptic curve discrete logarithm problem (ECDLP). Most proposed secure group communication schemes employ the DLP-based Diffie-Hellman protocol. This paper proposes the ECDLP-based Diffie-Hellman protocols for secure group communication and evaluates their performance on wireless ad hoc networks. The proposed schemes are compared at the same security level with DLP-based group protocols under different channel conditions. Our experiments and analysis show that the Tree-based Group Elliptic Curve Diffie-Hellman (TGECDH) protocol is the best in overall performance for secure group communication among the four schemes discussed in the paper. Low communication overhead, relatively low computation load and short packets are the main reasons for the good performance of the TGECDH protocol.
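The elliptic-curve Diffie-Hellman exchange at the core of the ECDLP-based protocols can be sketched over a toy curve. This is a minimal sketch with illustrative parameters (curve y² = x³ + 2x + 3 mod 97, made-up private keys); real deployments use standardized curves such as secp256r1 through a vetted cryptographic library.

```python
# Toy ECDH over y^2 = x^3 + 2x + 3 (mod 97) -- illustrative parameters only.
P, A, B = 97, 2, 3
G = (3, 6)   # generator point: 3^3 + 2*3 + 3 = 36 = 6^2 (mod 97)
O = None     # point at infinity (group identity)

def add(p1, p2):
    """Elliptic-curve point addition."""
    if p1 is O: return p2
    if p2 is O: return p1
    (x1, y1), (x2, y2) = p1, p2
    if x1 == x2 and (y1 + y2) % P == 0:
        return O                                   # vertical line -> infinity
    if p1 == p2:
        m = (3 * x1 * x1 + A) * pow(2 * y1, -1, P) % P   # tangent slope
    else:
        m = (y2 - y1) * pow(x2 - x1, -1, P) % P          # chord slope
    x3 = (m * m - x1 - x2) % P
    return (x3, (m * (x1 - x3) - y1) % P)

def mul(k, pt):
    """Scalar multiplication k*pt by double-and-add."""
    acc = O
    while k:
        if k & 1:
            acc = add(acc, pt)
        pt = add(pt, pt)
        k >>= 1
    return acc

alice_priv, bob_priv = 21, 35     # illustrative secret scalars
alice_pub = mul(alice_priv, G)    # public values exchanged over the network
bob_pub = mul(bob_priv, G)
shared_a = mul(alice_priv, bob_pub)
shared_b = mul(bob_priv, alice_pub)
assert shared_a == shared_b       # both ends derive the same shared point
```

The group variants discussed in the paper (such as TGECDH) arrange repeated pairwise exchanges of this form along a key tree so that all members converge on one group key.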
Abstract:
One problem with the component-based software development approach is that once software modules are reused over generations of products, they form legacy structures that can be challenging to understand, making these systems difficult to validate. Tools and methodologies that let engineers see the interactions of these software modules will therefore enhance their ability to make such software systems more dependable. To address this need, we propose SimSight, a framework that captures dynamic call graphs in Simics, a widely adopted commercial full-system simulator. Simics simulates complete computer systems: it performs nearly identical tasks to a real system, at much lower speed but with far greater execution observability. We have implemented SimSight to generate dynamic call graphs of statically and dynamically linked functions in an x86/Linux environment. A case study illustrates how SimSight can be used to identify sources of software errors. We then evaluate its performance using 12 integer programs from the SPEC CPU2006 benchmark suite.
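The dynamic call graphs such a framework records can be sketched at a much smaller scale in plain Python with `sys.setprofile` (an analogue of the idea, not SimSight's actual full-system mechanism; the traced functions are made up for illustration):

```python
import sys
from collections import defaultdict

# Record caller -> callee edges as functions execute.
call_graph = defaultdict(set)

def _tracer(frame, event, arg):
    if event == "call" and frame.f_back is not None:
        caller = frame.f_back.f_code.co_name
        callee = frame.f_code.co_name
        call_graph[caller].add(callee)

def helper():
    return 1

def worker():
    return helper() + 1

def main():
    return worker()

sys.setprofile(_tracer)   # enable profiling of Python-level calls
main()
sys.setprofile(None)      # disable it again
# call_graph now contains the edges main -> worker and worker -> helper
```

A full-system tool like SimSight does the equivalent at the instruction level inside the simulator, which is what lets it also see dynamically linked functions.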
Abstract:
Observability measures the support of computer systems to accurately capture, analyze, and present (collectively, observe) internal information about the systems. Observability frameworks play important roles in program understanding, troubleshooting, performance diagnosis, and optimization. However, traditional solutions are either expensive or coarse-grained, which compromises their utility for today's increasingly complex software systems. New solutions are emerging for VM-based languages due to the full control language VMs have over program executions. Nonetheless, existing solutions of this kind still lack flexibility, have high overhead, or provide limited context information for developing powerful dynamic analyses. In this thesis, we present a VM-based infrastructure, called the marker tracing framework (MTF), to address the deficiencies of existing solutions and provide better observability for VM-based languages. MTF serves as a solid foundation for implementing fine-grained, low-overhead program instrumentation. Specifically, MTF allows analysis clients to: 1) define custom events with rich semantics; 2) specify precisely the program locations where the events should trigger; and 3) adaptively enable/disable the instrumentation at runtime. In addition, MTF-based analysis clients are more powerful by having access to all information available to the VM. To demonstrate the utility and effectiveness of MTF, we present two analysis clients: 1) dynamic typestate analysis with adaptive online program analysis (AOPA); and 2) selective probabilistic calling context analysis (SPCC). We also evaluate the runtime performance of MTF and the typestate client with the DaCapo benchmarks. The results show that: 1) MTF has acceptable runtime overhead when tracing moderate numbers of marker events; 2) AOPA is highly effective in reducing the event frequency for dynamic typestate analysis; and 3) language VMs can be exploited to offer greater observability.
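The three client capabilities listed above can be sketched in miniature (a hypothetical decorator-based API, not MTF's actual interface): a custom named event, an explicit trigger site, and a runtime on/off switch.

```python
import functools

events = []      # recorded (event_name, function_name) marker events
enabled = True   # adaptive switch: analyses can flip this at runtime

def marker(event_name):
    """Attach a named marker event to a function (the trigger site)."""
    def deco(fn):
        @functools.wraps(fn)
        def wrapper(*args, **kwargs):
            if enabled:                      # instrumentation can be disabled
                events.append((event_name, fn.__name__))
            return fn(*args, **kwargs)
        return wrapper
    return deco

@marker("typestate-check")                   # custom event with a precise site
def open_file(name):
    return f"handle:{name}"

open_file("a.txt")    # traced: one marker event is recorded
enabled = False       # the analysis client turns instrumentation off
open_file("b.txt")    # not traced
```

In MTF itself the equivalent hooks live inside the VM, which is why its clients also see VM-internal state that a library-level decorator cannot reach.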
Abstract:
The web services (WS) technology provides a comprehensive solution for representing, discovering, and invoking services in a wide variety of environments, including Service Oriented Architectures (SOA) and grid computing systems. At the core of WS technology lie a number of XML-based standards, such as the Simple Object Access Protocol (SOAP), that have successfully ensured WS extensibility, transparency, and interoperability. Nonetheless, there is an increasing demand to enhance WS performance, which is severely impaired by XML's verbosity. SOAP communications produce considerable network traffic, making them unfit for distributed, loosely coupled, and heterogeneous computing environments such as the open Internet. Also, they introduce higher latency and processing delays than other technologies, like Java RMI and CORBA. WS research has recently focused on SOAP performance enhancement. Many approaches build on the observation that SOAP message exchange usually involves highly similar messages (those created by the same implementation usually have the same structure, and those sent from a server to multiple clients tend to show similarities in structure and content). Similarity evaluation and differential encoding have thus emerged as SOAP performance enhancement techniques. The main idea is to identify the common parts of SOAP messages, to be processed only once, avoiding a large amount of overhead. Other approaches investigate nontraditional processor architectures, including micro- and macro-level parallel processing solutions, so as to further increase the processing rates of SOAP/XML software toolkits. This survey paper provides a concise yet comprehensive review of the research efforts aimed at SOAP performance enhancement. A unified view of the problem is provided, covering almost every phase of SOAP processing, ranging over message parsing, serialization, deserialization, compression, multicasting, security evaluation, and data/instruction-level processing.
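The differential-encoding idea described above can be sketched with the standard library: a second, highly similar SOAP message is transmitted as a delta against a template, and the receiver patches the template back into the full message. The messages and the opcode-based delta format below are illustrative, not any specific WS-compression scheme.

```python
import difflib

# Two structurally identical SOAP messages differing only in payload.
msg1 = "<Envelope><Body><getQuote><symbol>IBM</symbol></getQuote></Body></Envelope>"
msg2 = "<Envelope><Body><getQuote><symbol>ACME</symbol></getQuote></Body></Envelope>"

sm = difflib.SequenceMatcher(a=msg1, b=msg2)
# Encode msg2 relative to msg1: "equal" runs reference the shared template,
# so only the differing payload text needs to be carried explicitly.
delta = [(op, i1, i2, msg2[j1:j2]) for op, i1, i2, j1, j2 in sm.get_opcodes()]

def apply_delta(template, delta):
    """Rebuild the full message from the template plus the delta."""
    out = []
    for op, i1, i2, repl in delta:
        out.append(template[i1:i2] if op == "equal" else repl)
    return "".join(out)

assert apply_delta(msg1, delta) == msg2
```

The same principle also speeds up parsing and serialization: the common parts are processed once, and only the delta is handled per message.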
Abstract:
Lipid peroxidation (LPO) has been associated with periodontal disease, and the evaluation of malondialdehyde (MDA) in the gingival crevicular fluid (GCF), an inflammatory exudate from the surrounding tissue of the periodontium, may be useful to clarify the role of LPO in the pathogenesis of periodontal disease. We describe the validation of a method to measure MDA in the GCF using high-performance liquid chromatography. MDA calibration curves were prepared with phosphate-buffered solution spiked with increasing known concentrations of MDA. Healthy and diseased GCF samples were collected from the same patient to avoid interindividual variability. MDA response was linear in the range measured, and excellent agreement was observed between added and detected concentrations of MDA. Samples' intra- and interday coefficients of variation were below 6.3% and 12.4%, respectively. The limit of quantitation (signal/noise = 5) was 0.03 μM. When the validated method was applied to the GCF, excellent agreement was observed in the MDA quantitation from healthy and diseased sites, and diseased sites presented more MDA than healthy sites (P < 0.05). In this study, a validated method for MDA quantitation in GCF was established with satisfactory sensitivity, precision, and accuracy. (C) 2012 Elsevier Inc. All rights reserved.
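The calibration-curve step can be sketched numerically: fit a least-squares line to spiked-standard responses, then back-calculate an unknown concentration from its detector response. The concentrations and peak areas below are made-up illustrative numbers, not data from the study.

```python
# Illustrative spiked standards: MDA concentration (uM) vs. detector response.
conc = [0.1, 0.5, 1.0, 2.0, 4.0]
area = [12.0, 58.0, 119.0, 241.0, 482.0]

# Ordinary least-squares fit of area = slope * conc + intercept.
n = len(conc)
sx, sy = sum(conc), sum(area)
sxx = sum(x * x for x in conc)
sxy = sum(x * y for x, y in zip(conc, area))
slope = (n * sxy - sx * sy) / (n * sxx - sx * sx)
intercept = (sy - slope * sx) / n

def back_calc(peak_area):
    """Back-calculate concentration from a measured peak area."""
    return (peak_area - intercept) / slope

# A GCF sample giving area 119 should read back near 1.0 uM on this curve.
```

The "excellent agreement between added and detected concentrations" reported above corresponds to these back-calculated values falling close to the spiked ones.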
Abstract:
A power transformer needs continuous monitoring and fast protection as it is a very expensive piece of equipment and an essential element in an electrical power system. The most common protection technique used is the percentage differential logic, which provides discrimination between an internal fault and different operating conditions. Unfortunately, there are some operating conditions of power transformers that can mislead the conventional protection affecting the power system stability negatively. This study proposes the development of a new algorithm to improve the protection performance by using fuzzy logic, artificial neural networks and genetic algorithms. An electrical power system was modelled using Alternative Transients Program software to obtain the operational conditions and fault situations needed to test the algorithm developed, as well as a commercial differential relay. Results show improved reliability, as well as a fast response of the proposed technique when compared with conventional ones.
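The conventional percentage differential logic that the proposed fuzzy/neural/genetic algorithm aims to improve can be sketched as follows; the slope and pickup values are illustrative choices, not settings from the study.

```python
def percentage_differential(i_primary, i_secondary, slope=0.25, pickup=0.1):
    """Conventional percentage differential relay logic (per-unit currents).

    Trips when the differential (operating) current exceeds a fixed
    percentage of the restraint current plus a small pickup threshold.
    """
    i_diff = abs(i_primary - i_secondary)             # operating quantity
    i_restraint = (abs(i_primary) + abs(i_secondary)) / 2
    return i_diff > pickup + slope * i_restraint      # True -> trip

# Normal through-load: primary and secondary currents nearly match -> no trip.
# Internal fault: large mismatch between the two windings -> trip.
```

Conditions such as inrush or CT saturation can create a spurious differential current under this simple rule, which is exactly the misoperation the paper's adaptive techniques target.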
Abstract:
The presence of cognitive impairment is a frequent complaint among elderly individuals in the general population. This study aimed to investigate the relationship between aging-related regional gray matter (rGM) volume changes and cognitive performance in healthy elderly adults. Morphometric magnetic resonance imaging (MRI) measures were acquired in a community-based sample of 170 cognitively-preserved subjects (66 to 75 years). This sample was drawn from the "Sao Paulo Ageing and Health" study, an epidemiological study aimed at investigating the prevalence and risk factors for Alzheimer's disease in a low income region of the city of Sao Paulo. All subjects underwent cognitive testing using a battery cross-culturally validated by the Research Group on Dementia 10/66, as well as the SKT (applied on the day of MRI scanning). Blood genotyping was performed to determine the frequency of the three apolipoprotein E allele variants (APOE ε2/ε3/ε4) in the sample. Voxelwise linear correlation analyses between rGM volumes and cognitive test scores were performed using voxel-based morphometry, including chronological age as a covariate. There were significant direct correlations between worse overall cognitive performance and rGM reductions in the right orbitofrontal cortex and parahippocampal gyrus, and also between verbal fluency scores and bilateral parahippocampal gyral volume (p < 0.05, familywise-error corrected for multiple comparisons using small volume correction). When analyses were repeated adding the presence of the APOE ε4 allele as a confounding covariate or excluding a minority of APOE ε2 carriers, all findings retained significance. These results indicate that rGM volumes are relevant biomarkers of cognitive deficits in healthy aging individuals, most notably involving temporolimbic regions and the orbitofrontal cortex.
Abstract:
A study was made to evaluate the effect of a castor oil-based detergent on strawberry crops treated with different classes of pesticides, namely deltamethrin, folpet, tebuconazole, abamectin and mancozeb, in a controlled environment. Experimental crops of greenhouse strawberries were cultivated under five different treatments, with control groups, using pesticides and the castor oil-based detergent. The results showed that group 2, which was treated with the castor oil-based detergent, presented the lowest amount of pesticide residues and the highest fruit quality.
Abstract:
In this paper, a novel method for power quality signal decomposition is proposed based on Independent Component Analysis (ICA). This method aims to decompose the power system signal (voltage or current) into components that can provide more specific information about the different disturbances occurring simultaneously during a multiple disturbance situation. ICA is originally a multichannel technique; here, however, it is used to blindly separate disturbances present in a single measured signal (single channel). A filter-bank preprocessing step for the ICA is therefore proposed. The proposed method was applied to synthetic data, simulated data, and actual power system signals, showing very good performance. A comparison with the decomposition provided by the Discrete Wavelet Transform shows that the proposed method presented better decoupling for the analyzed data. (C) 2012 Elsevier Ltd. All rights reserved.
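The preprocessing idea can be sketched in a few lines: turn one measured signal into several channels via a filter bank, so a multichannel method such as ICA becomes applicable. The crude two-band bank below (moving-average lowpass plus its residual) is only an illustration of the principle; the paper's actual filter bank design is not specified here.

```python
import math

def lowpass(signal, window=5):
    """Centered moving-average lowpass filter (shrinks window at the edges)."""
    half = window // 2
    out = []
    for i in range(len(signal)):
        lo, hi = max(0, i - half), min(len(signal), i + half + 1)
        out.append(sum(signal[lo:hi]) / (hi - lo))
    return out

def filter_bank(signal, window=5):
    """Split one channel into two subbands: smoothed trend and residual."""
    low = lowpass(signal, window)
    high = [s - l for s, l in zip(signal, low)]   # residual (high) band
    return [low, high]   # pseudo-multichannel input for ICA

# Illustrative signal: 60 Hz fundamental plus a faster oscillatory disturbance.
fs = 3840
signal = [math.sin(2 * math.pi * 60 * t / fs)
          + 0.3 * math.sin(2 * math.pi * 960 * t / fs) for t in range(256)]
low, high = filter_bank(signal)
# `low` tracks the fundamental; `high` concentrates the fast disturbance.
```

Because the two bands sum back to the original signal, no information is lost before the ICA separation stage.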
Abstract:
XML similarity evaluation has become a central issue in the database and information communities, its applications ranging over document clustering, version control, data integration and ranked retrieval. Various algorithms for comparing hierarchically structured data, XML documents in particular, have been proposed in the literature. Most of them make use of techniques for finding the edit distance between tree structures, XML documents being commonly modeled as Ordered Labeled Trees. Yet, a thorough investigation of current approaches led us to identify several similarity aspects, i.e., sub-tree related structural and semantic similarities, which are not sufficiently addressed while comparing XML documents. In this paper, we provide an integrated and fine-grained comparison framework to deal with both structural and semantic similarities in XML documents (detecting the occurrences and repetitions of structurally and semantically similar sub-trees), and to allow the end-user to adjust the comparison process according to her requirements. Our framework consists of four main modules for (i) discovering the structural commonalities between sub-trees, (ii) identifying sub-tree semantic resemblances, (iii) computing tree-based edit operations costs, and (iv) computing tree edit distance. Experimental results demonstrate higher comparison accuracy with respect to alternative methods, while timing experiments reflect the impact of semantic similarity on overall system performance.
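The tree edit distance underlying these XML comparison techniques generalizes the classic string edit distance sketched below: the minimum number of insert, delete, and relabel operations turning one sequence into another (for trees, the operations act on nodes under ancestor-ordering constraints rather than on characters). The node-label sequences in the example are made up for illustration.

```python
def edit_distance(a, b):
    """Levenshtein distance between two sequences via dynamic programming."""
    m, n = len(a), len(b)
    d = [[0] * (n + 1) for _ in range(m + 1)]
    for i in range(m + 1):
        d[i][0] = i                      # delete all of a[:i]
    for j in range(n + 1):
        d[0][j] = j                      # insert all of b[:j]
    for i in range(1, m + 1):
        for j in range(1, n + 1):
            cost = 0 if a[i - 1] == b[j - 1] else 1   # relabel cost
            d[i][j] = min(d[i - 1][j] + 1,            # delete a[i-1]
                          d[i][j - 1] + 1,            # insert b[j-1]
                          d[i - 1][j - 1] + cost)     # match/relabel
    return d[m][n]

# Works on any sequences, e.g. node-label lists of two XML fragments:
# edit_distance(["book", "title", "author"], ["book", "title", "year"]) == 1
```

Tree edit distance algorithms for Ordered Labeled Trees follow the same dynamic-programming pattern, with extra indexing to respect the tree structure; the structural and semantic modules described above refine what the relabel cost means for sub-trees.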