976 results for Technical Report, Computer Science
Abstract:
OBJECTIVE: to characterize the enrollment of graduates of the Speech-Language Pathology Program at Universidade Estadual Paulista (UNESP) - Marília in Brazilian Stricto Sensu Graduate Programs (PPG). METHOD: lists of graduates and the Curriculum Vitae of each graduate and of their advisor were used. RESULTS: of the 537 graduates, 16.57% had completed or were attending a PPG; of these, 98.88% at the master's level and 37.08% also at the doctoral level. By major knowledge area, master's graduates were linked predominantly to programs in Health Sciences (50%), followed by Human Sciences (31.80%) and Linguistics, Letters and Arts (13.64%). At the doctoral level, 33.33% were in Human Sciences and 30.30% each in Health Sciences and in Linguistics, Letters and Arts. Regarding the specific knowledge area, master's graduates were linked mainly to Speech-Language Pathology (30.68%), Education (28.41%), Linguistics (13.64%) and Medicine I (9.09%); doctoral graduates to Education (33.33%), Linguistics (30.30%) and Speech-Language Pathology (9.09%). Language was the focus of 55.68% of the dissertations and 51.52% of the theses. UNESP predominated, accounting for 39.77% at the master's level and 48.48% at the doctoral level. Programs rated 4 predominated, covering 52.27% of the master's graduates and 45.45% of the doctoral graduates. Whenever the information was available (55.68%), all graduates had received funding. The Likelihood Ratio Test indicated no significant differences between the master's and doctoral percentages. CONCLUSION: the results exceeded those reported for the same State, showed the interdisciplinary character of Speech-Language Pathology Science and the predominance of language as a research theme.
Abstract:
OBJECTIVE: To determine the pH over a period of 168 h and the ionic silver content of aqueous silver nitrate solutions at various concentrations and post-preparation times. The possible effects of these factors on the microleakage test in adhesive/resin restorations in primary and permanent teeth were also evaluated. MATERIAL AND METHODS: A digital pH meter was used to measure the pH of solutions prepared with three types of water (purified, deionized or distilled) and three brands of silver nitrate salt (Merck, Synth or Cennabras) at 0, 1, 2, 24, 48, 72, 96 and 168 h after preparation and storage in transparent or dark bottles. Ionic silver was assayed by atomic emission spectrometry according to the post-preparation times (2, 24, 48, 72, 96, 168 h) and concentrations (1, 5, 25, 50%) of the solutions. For each sample of each condition, three readings were taken and averaged. Class V cavities with enamel margins were prepared on primary and permanent teeth and restored with the adhesive systems OptiBond FL or OptiBond SOLO Plus SE and the composite resin Filtek Z-250. After nail polish coverage, the permanent teeth were immersed in 25% or 50% AgNO3 solutions and the primary teeth in 5% or 50% AgNO3 solutions for microleakage evaluation. ANOVA and Tukey's test were used for data analysis (α=5%). RESULTS: The mean pH of the solutions ranged from neutral to alkaline (7.9±2.2 to 11.8±0.9). Mean ionic silver content differed depending on the concentration of the solution (4.75±0.5 to 293±15.3 ppm). In the microleakage test, a significant difference was observed only for the adhesive system factor (p=0.000). CONCLUSIONS: Under the tested experimental conditions, it may be concluded that aqueous AgNO3 solutions have a neutral/alkaline pH and a service life of up to 168 h; that the ionic silver level is proportional to the concentration of the solution; that even at 5% concentration the solutions were capable of indicating loss of marginal seal in the composite restorations; and that the 3-step conventional adhesive system performed better regarding microleakage in enamel of primary and permanent teeth.
Abstract:
This paper presents a proposal for a reference model for software development aimed at small companies. Despite the importance of small software companies in Latin America, the lack of standards of their own, capable of meeting their specific needs, has created serious difficulties both in improving their processes and in quality certification. As a contribution to a better understanding of the subject, we propose a reference model and, as a means of validating the proposal, present a report of its application in a small Brazilian company committed to certification under the MPS.BR quality model.
Abstract:
The TCP/IP architecture has been consolidated as the standard for distributed systems. However, there is considerable research and discussion about alternatives for the evolution of this architecture, and within this study area this work presents the Title Model, which aims to support application needs through the use of a cross-layer ontology and horizontal addressing in a next-generation Internet. From a practical viewpoint, the network cost reduction is shown for a distributed programming example in networks with layer-2 connectivity. To demonstrate the Title Model's improvement, a network analysis is presented for a message passing interface application that sends a vector of integers and returns its sum. This analysis confirms that the current proposal allows, in this environment, a reduction of 15.23% in the total network traffic, in bytes.
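To make the benchmark concrete, here is a minimal sketch in C of the workload described above under standard MPI: one process sends a vector of integers to another, which returns the sum. The vector length, ranks and tags are illustrative assumptions; the Title Model's own addressing mechanism is not reproduced here.

/* Minimal sketch of the benchmark workload: rank 0 sends a vector of
 * integers to rank 1, which returns the sum. Compile with mpicc and
 * run with at least two processes (mpirun -np 2 ./vecsum). The vector
 * length N is an illustrative assumption. */
#include <mpi.h>
#include <stdio.h>

#define N 1024

int main(int argc, char **argv)
{
    int rank, i;
    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    if (rank == 0) {
        int vec[N], sum;
        for (i = 0; i < N; i++)
            vec[i] = i;                       /* sample payload */
        MPI_Send(vec, N, MPI_INT, 1, 0, MPI_COMM_WORLD);
        MPI_Recv(&sum, 1, MPI_INT, 1, 1, MPI_COMM_WORLD, MPI_STATUS_IGNORE);
        printf("sum = %d\n", sum);
    } else if (rank == 1) {
        int vec[N], sum = 0;
        MPI_Recv(vec, N, MPI_INT, 0, 0, MPI_COMM_WORLD, MPI_STATUS_IGNORE);
        for (i = 0; i < N; i++)
            sum += vec[i];
        MPI_Send(&sum, 1, MPI_INT, 0, 1, MPI_COMM_WORLD);
    }

    MPI_Finalize();
    return 0;
}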
Abstract:
Hub-and-spoke networks are widely studied in the area of location theory. They arise in several contexts, including passenger airlines, postal and parcel delivery, and computer and telecommunication networks. Hub location problems usually involve three simultaneous decisions: the optimal number of hub nodes, their locations, and the allocation of the non-hub nodes to the hubs. In the uncapacitated single allocation hub location problem (USAHLP), hub nodes have no capacity constraints and non-hub nodes must be assigned to exactly one hub. In this paper, we propose three variants of a simple and efficient multi-start tabu search heuristic, as well as a two-stage integrated tabu search heuristic, to solve this problem. In the multi-start heuristics, several different initial solutions are constructed and then improved by tabu search, while in the two-stage integrated heuristic tabu search is applied to improve both the locational and the allocational part of the problem. Computational experiments using typical benchmark problems (Civil Aeronautics Board (CAB) and Australian Post (AP) data sets) as well as new and modified instances show that our approaches consistently return the optimal or best-known results in very short CPU times, thus making it possible to efficiently solve larger instances of the USAHLP than those found in the literature. We also report the integer optimal solutions for all 80 CAB data set instances and for the 12 AP instances with up to 100 nodes, as well as for the corresponding newly generated AP instances with reduced fixed costs.
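As a hedged illustration of one building block of such heuristics, the following C sketch evaluates a candidate hub set by assigning every node to its cheapest open hub, the single-allocation step that a tabu search move would repeatedly invoke. The 4-node cost matrix and fixed costs are invented toy data, and inter-hub transfer costs with the usual discount factor are omitted for brevity.

/* Hypothetical sketch of the single-allocation step: given a candidate
 * hub set, each node is assigned to its cheapest open hub (a hub is
 * assigned to itself at zero cost). Data values are invented. */
#include <stdio.h>

#define N 4

double cost[N][N] = {
    {0, 3, 7, 5},
    {3, 0, 4, 6},
    {7, 4, 0, 2},
    {5, 6, 2, 0},
};
double fixed_cost[N] = {10, 12, 9, 11};

/* Allocate every node to its cheapest open hub; return the total cost. */
double evaluate(const int *is_hub, int *alloc)
{
    double total = 0.0;
    for (int i = 0; i < N; i++)
        if (is_hub[i])
            total += fixed_cost[i];
    for (int i = 0; i < N; i++) {
        double best = 1e18;
        for (int h = 0; h < N; h++) {
            if (is_hub[h] && cost[i][h] < best) {
                best = cost[i][h];
                alloc[i] = h;
            }
        }
        total += best;
    }
    return total;
}

int main(void)
{
    int is_hub[N] = {1, 0, 1, 0};   /* candidate solution: hubs 0 and 2 */
    int alloc[N];
    printf("cost = %.1f\n", evaluate(is_hub, alloc));   /* prints 24.0 */
    return 0;
}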
Abstract:
Computer viruses are an important risk to computational systems, endangering both corporations of all sizes and personal computers used for domestic applications. Here, classical epidemiological models for disease propagation are adapted to computer networks and, using simple systems identification techniques, a model called SAIC (Susceptible, Antidotal, Infectious, Contaminated) is developed. Real data about computer viruses are used to validate the model.
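For readers wanting a feel for how such a compartmental model is simulated, the following C sketch integrates a four-compartment system in the spirit of SAIC with a simple Euler step. The rate terms and parameter values are illustrative assumptions only; they are not the fitted equations reported in the paper.

/* Illustrative four-compartment integration in the spirit of SAIC
 * (Susceptible, Antidotal, Infectious, Contaminated). Rate terms and
 * parameters below are invented for demonstration, NOT the paper's
 * identified model. */
#include <stdio.h>

int main(void)
{
    double S = 0.97, A = 0.02, I = 0.01, C = 0.0;   /* initial fractions */
    double beta  = 0.5;    /* assumed infection rate                */
    double alpha = 0.1;    /* assumed antidote (patching) uptake    */
    double gamma = 0.05;   /* assumed infectious -> contaminated rate */
    double dt = 0.1;

    for (int step = 0; step <= 1000; step++) {
        double dS = -beta * S * I - alpha * S;
        double dA =  alpha * S;
        double dI =  beta * S * I - gamma * I;
        double dC =  gamma * I;
        S += dt * dS;  A += dt * dA;  I += dt * dI;  C += dt * dC;
        if (step % 100 == 0)
            printf("t=%5.1f  S=%.3f A=%.3f I=%.3f C=%.3f\n",
                   step * dt, S, A, I, C);
    }
    return 0;
}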
Abstract:
We have used various computational methodologies, including molecular dynamics, density functional theory, virtual screening, ADMET predictions and molecular interaction field studies, to design and analyze four novel potential inhibitors of farnesyltransferase (FTase). Evaluation of two proposals regarding their potential as drugs and as lead compounds has indicated them as novel promising FTase inhibitors, with theoretically interesting pharmacotherapeutic profiles when compared to the very active and most cited FTase inhibitors with reported activity data, which are launched drugs or compounds in clinical tests. One of our two proposals appears to be the more promising drug candidate and FTase inhibitor, but both derivative molecules indicate potentially very good pharmacotherapeutic profiles in comparison with Tipifarnib and Lonafarnib, two reference pharmaceuticals. Two other proposals were selected with virtual screening approaches and investigated by LIS, suggesting novel and alternative scaffolds for the design of future potential FTase inhibitors. Such compounds can be explored as promising molecules with which to initiate a research protocol aimed at discovering novel anticancer drug candidates targeting farnesyltransferase in the fight against cancer.
Abstract:
These notes follow on from the material that you studied in CSSE1000 Introduction to Computer Systems. There you studied details of logic gates, binary numbers and instruction set architectures using the Atmel AVR microcontroller family as an example. In your present course (METR2800 Team Project I), you need to get on to designing and building an application which will include such a microcontroller. These notes focus on programming an AVR microcontroller in C and provide a number of example programs to illustrate the use of some of the AVR peripheral devices.
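In the spirit of the example programs mentioned, a minimal AVR C program of the kind such notes begin with might toggle an LED on a port pin. This sketch assumes avr-gcc/avr-libc with F_CPU defined for the target board; the choice of pin PB0 is arbitrary.

/* Minimal AVR C example: toggle an LED on PB0. Assumes avr-gcc with
 * F_CPU defined for the target board so that _delay_ms() is accurate. */
#include <avr/io.h>
#include <util/delay.h>

int main(void)
{
    DDRB |= (1 << PB0);           /* configure PB0 as an output */
    while (1) {
        PORTB ^= (1 << PB0);      /* toggle the LED             */
        _delay_ms(500);           /* wait half a second         */
    }
    return 0;
}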
Abstract:
OctVCE is a Cartesian cell CFD code produced especially for numerical simulations of shock and blast wave interactions with complex geometries, in particular from explosions. Virtual Cell Embedding (VCE) was chosen as its Cartesian cell kernel for its simplicity and sufficiency for practical engineering design problems. The code uses a finite-volume formulation of the unsteady Euler equations with a second-order explicit Runge-Kutta Godunov (MUSCL) scheme. Gradients are calculated using a least-squares method with a minmod limiter. The flux solvers used are AUSM, AUSMDV and EFM. No fluid-structure coupling or chemical reactions are allowed, but the gas models can be perfect gas and JWL or JWLB for the explosive products. This report also describes the code's 'octree' mesh adaptation capability and the point-inclusion query procedures for the VCE geometry engine. Finally, some space is devoted to describing code parallelization using the shared-memory OpenMP paradigm. The user manual for the code is to be found in the companion report 2007/13.
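As a small, textbook-level illustration of one ingredient named above (not code taken from OctVCE), the minmod limiter selects the smaller-magnitude one-sided slope when both agree in sign and otherwise flattens the reconstruction:

/* Textbook sketch of the minmod slope limiter used in MUSCL-type
 * reconstructions: return the smaller-magnitude slope when the two
 * one-sided slopes agree in sign, otherwise zero. */
#include <stdio.h>

double minmod(double a, double b)
{
    if (a > 0.0 && b > 0.0)
        return a < b ? a : b;
    if (a < 0.0 && b < 0.0)
        return a > b ? a : b;
    return 0.0;                   /* slopes disagree: flatten */
}

int main(void)
{
    /* one-sided slopes of a cell, illustrative values */
    printf("%f\n", minmod(0.8, 0.3));    /* -> 0.3 */
    printf("%f\n", minmod(-0.5, 0.2));   /* -> 0.0 */
    return 0;
}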
Abstract:
OctVCE is a Cartesian cell CFD code produced especially for numerical simulations of shock and blast wave interactions with complex geometries. Virtual Cell Embedding (VCE) was chosen as its Cartesian cell kernel as it is simple to code and sufficient for practical engineering design problems. This also makes the code much more 'user-friendly' than structured grid approaches, as the gridding process is done automatically. The CFD methodology relies on a finite-volume formulation of the unsteady Euler equations, solved using a standard explicit Godunov (MUSCL) scheme. Both octree-based adaptive mesh refinement and shared-memory parallel processing capability have also been incorporated. For further details on the theory behind the code, see the companion report 2007/12.
Abstract:
Coset enumeration is one of the most important procedures for investigating finitely presented groups. We present a practical parallel procedure for coset enumeration on shared memory processors. The shared memory architecture is particularly interesting because such parallel computation is both faster and cheaper. The lower cost comes when the program requires large amounts of memory, and additional CPUs allow us to reduce the time for which the expensive memory is in use. Rather than report on a suite of test cases, we take a single, typical case and analyze the performance factors in depth. The parallelization is achieved through a master-slave architecture. This results in an interesting phenomenon, whereby the CPU time is divided into a sequential and a parallel portion, and the parallel part demonstrates a speedup that is linear in the number of processors. We describe an early version for which only 40% of the program was parallelized, and how it was modified to achieve 90% parallelization using 15 slave processors and a master. In the latter case, a sequential time of 158 seconds was reduced to 29 seconds using 15 slaves.
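These figures are consistent with Amdahl's law: with a parallelized fraction f and p slave processors, the expected run time is T(p) = T(1) × ((1 − f) + f/p), so for f = 0.9, p = 15 and T(1) = 158 s the prediction is 158 × (0.1 + 0.9/15) ≈ 25 s, close to the measured 29 s once master and communication overheads are accounted for.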
Abstract:
Spatial data is now used extensively in the Web environment, providing online customized maps and supporting map-based applications. The full potential of Web-based spatial applications, however, has yet to be achieved due to performance issues related to the large size and high complexity of spatial data. In this paper, we introduce a multiresolution approach to spatial data management and query processing such that the database server can choose spatial data at the right resolution level for different Web applications. One highly desirable property of the proposed approach is that the server-side processing cost and network traffic can be reduced when the level of resolution required by an application is low. Another advantage is that our approach pushes complex multiresolution structures and algorithms into the spatial database engine, so that the developer of spatial Web applications need not be concerned with such complexity. This paper explains the basic idea, technical feasibility and applications of multiresolution spatial databases.
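As a hedged sketch of the level-selection idea (the paper's actual structures and algorithms are richer), a server could pick the coarsest precomputed resolution level whose error bound still satisfies an application's tolerance; the levels and error values below are invented:

/* Hypothetical server-side resolution selection: pick the coarsest
 * precomputed level whose simplification error meets the application's
 * tolerance. Levels and error bounds are invented values. */
#include <stdio.h>

#define LEVELS 4

/* error bound (e.g., metres of positional error) per level, coarse to fine */
double level_error[LEVELS] = {500.0, 100.0, 20.0, 1.0};

int choose_level(double tolerance)
{
    for (int lvl = 0; lvl < LEVELS; lvl++)
        if (level_error[lvl] <= tolerance)
            return lvl;           /* coarsest acceptable level */
    return LEVELS - 1;            /* fall back to the finest   */
}

int main(void)
{
    printf("overview map -> level %d\n", choose_level(200.0));  /* 1 */
    printf("street map   -> level %d\n", choose_level(5.0));    /* 3 */
    return 0;
}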
Abstract:
We introduce biomimetic in silico devices, together with means for their validation and methods for testing and refining them. The devices are constructed from adaptable software components designed to map logically to biological components at multiple levels of resolution. In this report we focus on the liver; the goal is to validate components that mimic features of the lobule (the hepatic primary functional unit) and dynamic aspects of liver behavior, structure, and function. An assembly of lobule-mimetic devices represents an in silico liver. We validate against outflow profiles for sucrose administered as a bolus to isolated, perfused rat livers. Acceptable in silico profiles are experimentally indistinguishable from those of the in situ referent. This new technology is intended to provide powerful new tools for challenging our understanding of how biological functional units function in vivo.
Abstract:
In order to separate the effects of experience from other characteristics of word frequency (e.g., orthographic distinctiveness), computer science and psychology students rated their experience with computer science technical items and nontechnical items from a wide range of word frequencies prior to being tested for recognition memory of the rated items. For nontechnical items, there was a curvilinear relationship between recognition accuracy and word frequency for both groups of students. The usual superiority of low-frequency words was demonstrated and high-frequency words were recognized least well. For technical items, a similar curvilinear relationship was evident for the psychology students, but for the computer science students, recognition accuracy was inversely related to word frequency. The ratings data showed that subjective experience rather than background word frequency was the better predictor of recognition accuracy.
Abstract:
In this work, we take advantage of association rule mining to support two types of medical systems: Content-based Image Retrieval (CBIR) systems and Computer-Aided Diagnosis (CAD) systems. For content-based retrieval, association rules are employed to reduce the dimensionality of the feature vectors that represent the images and to improve the precision of similarity queries. We refer to the association rule-based method proposed here to improve CBIR systems as Feature selection through Association Rules (FAR). To improve CAD systems, we propose the Image Diagnosis Enhancement through Association rules (IDEA) method. Association rules are employed to suggest a second opinion to the radiologist or a preliminary diagnosis of a new image. An automatically obtained second opinion can either accelerate the diagnosing process or strengthen a hypothesis, increasing the probability that a prescribed treatment will be successful. Two new algorithms are proposed to support the IDEA method: one to pre-process low-level features and one to propose a preliminary diagnosis based on association rules. We performed several experiments to validate the proposed methods. The results indicate that association rules can be successfully applied to improve CBIR and CAD systems, enlarging the arsenal of techniques available to support medical image analysis in medical systems.
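As a minimal, hypothetical illustration of the statistics underlying association rule mining, the following C sketch computes the support and confidence of a single candidate rule X → Y over binary transactions; the FAR and IDEA methods described above are considerably more elaborate.

/* Support and confidence of one candidate rule X -> Y over binary
 * transactions. The tiny transaction set is invented; real miners
 * (e.g., Apriori) enumerate many candidate rules. */
#include <stdio.h>

#define T 5   /* transactions */
#define F 3   /* binary features, e.g. discretized image features */

int data[T][F] = {
    {1, 1, 0},
    {1, 1, 1},
    {0, 1, 1},
    {1, 1, 0},
    {1, 0, 1},
};

int main(void)
{
    int x = 0, y = 1;             /* candidate rule: feature 0 -> feature 1 */
    int n_x = 0, n_xy = 0;
    for (int t = 0; t < T; t++) {
        if (data[t][x]) {
            n_x++;
            if (data[t][y])
                n_xy++;
        }
    }
    printf("support(X->Y)    = %.2f\n", (double)n_xy / T);    /* 0.60 */
    printf("confidence(X->Y) = %.2f\n", (double)n_xy / n_x);  /* 0.75 */
    return 0;
}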