895 results for Teaching of mathematics. Combinatorial analysis. Heuristic analysis of combinatorial problems
Abstract:
Artificial neural networks are dynamic systems consisting of highly interconnected and parallel nonlinear processing elements. Systems based on artificial neural networks have high computational rates due to the use of a massive number of these computational elements. Neural networks with feedback connections provide a computing model capable of solving a rich class of optimization problems. In this paper, a modified Hopfield network is developed for solving problems related to operations research. The internal parameters of the network are obtained using the valid-subspace technique. Simulated examples are presented as an illustration of the proposed approach. Copyright (C) 2000 IFAC.
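The Hopfield-style dynamics described above drive the network state downhill on an energy function whose minima encode feasible solutions. A minimal discrete sketch of that idea, where the weights `W` and biases `b` are illustrative toy values, not parameters obtained via the paper's valid-subspace technique:

```python
import numpy as np

# Discrete Hopfield network minimizing E(v) = -1/2 v^T W v - b^T v
# over v in {0,1}^n via asynchronous updates. With symmetric W and
# zero diagonal, each flip lowers E, so a fixed point is reached.

def hopfield_run(W, b, v0, max_sweeps=100):
    v = v0.copy()
    for _ in range(max_sweeps):
        changed = False
        for i in range(len(v)):
            u = W[i] @ v + b[i]          # local field at neuron i
            new = 1 if u > 0 else 0
            if new != v[i]:
                v[i] = new
                changed = True
        if not changed:                  # fixed point: a local minimum of E
            break
    return v

# Toy instance: W rewards agreement between the two neurons,
# b biases both on; the stable state is [1, 1].
W = np.array([[0.0, 2.0], [2.0, 0.0]])
b = np.array([0.5, 0.5])
print(hopfield_run(W, b, np.array([0, 0])))  # -> [1 1]
```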
Abstract:
Williams syndrome (WS) is a neurodevelopmental genetic disorder often described as characterized by a dissociation between verbal and non-verbal abilities, although a number of studies disputing this proposal are emerging. Indeed, although individuals with WS have traditionally been reported as displaying increased speech fluency, this topic has not been fully addressed in research. In previous studies carried out with a small group of individuals with WS, we reported speech breakdowns during conversational and autobiographical narratives suggestive of language difficulties. In the current study, we characterized the speech fluency profile using an ecologically based measure - a narrative task (story generation) - collected from a group of individuals with WS (n = 30) and a typically developing group (n = 39) matched in mental age. Oral narratives were elicited using a picture stimulus - the Cookie Theft picture from the Boston Diagnostic Aphasia Examination. All narratives were analyzed according to the typology and frequency of fluency breakdowns (non-stuttered and stuttered disfluencies). Oral narratives in the WS group differed from those of the typically developing group, mainly due to a significant increase in the frequency of disfluencies, particularly hesitations, repetitions and pauses. This is the first evidence of disfluencies in WS obtained using an ecologically based task (an oral narrative task), suggesting that these speech disfluencies may represent a significant marker of language problems in WS. (C) 2011 Elsevier Ltd. All rights reserved.
Abstract:
Humans, as well as some animals, are born with the ability to perceive quantities. The needs arising from the evolution of societies and of technological resources have made the optimization of counting methods necessary. Although necessary and useful, these methods pose many difficulties in teaching. In order to broaden the range of tools available for teaching Combinatorial Analysis, this work presents a flowchart aimed at helping students fix the initial concepts of the subject through practical exercises.
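A flowchart for the initial concepts of Combinatorial Analysis typically branches on two questions: does order matter, and is repetition allowed? A sketch of that decision procedure (the function name and the example counts are ours, not the paper's):

```python
from math import comb, perm

def count_selections(n, k, ordered, repetition):
    """Decision procedure mirroring a counting-methods flowchart:
    number of ways to choose k items from n."""
    if ordered and repetition:
        return n ** k                  # sequences with repetition
    if ordered and not repetition:
        return perm(n, k)              # k-permutations: n!/(n-k)!
    if not ordered and repetition:
        return comb(n + k - 1, k)      # multisets (stars and bars)
    return comb(n, k)                  # plain combinations

print(count_selections(4, 2, ordered=True,  repetition=False))  # -> 12
print(count_selections(4, 2, ordered=False, repetition=True))   # -> 10
```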
Abstract:
The family Loricariidae, with about 690 species divided into six subfamilies, is one of the world's largest fish families. Recent studies have shown the existence of several problems in the definition of natural groups within the family, which has made the characterization of the subfamilies, and even of some genera, quite difficult. With the main objective of contributing to a better understanding of the relationships among loricariids, cytogenetic analyses were conducted on two species of Neoplecostominae and nine species of Hypostominae that, according to morphological and molecular data, may belong to a new monophyletic unit. The results showed marked chromosomal conservation, with 2n = 54 chromosomes and single interstitial Ag-NORs in all species analyzed. Considering that Neoplecostominae is the primitive sister group of all other loricariids, with the exception of Lithogeneinae, this karyotypic structure may represent the primitive condition for the family Loricariidae. The cytogenetic characteristics shared by the species of Neoplecostominae and Hypostominae analyzed in the present study reinforce the hypothesis that the species of both subfamilies may belong to a natural group.
Biometric analysis of the maxillary permanent molar teeth and their relation to furcation involvement.
Abstract:
A high rate of root exposure, and consequently exposure of the furcation area, is usually observed in multirooted teeth. In maxillary molar teeth, this may endanger the three existing furcations (buccal, mesial and distal), causing serious problems. In this research, distance measures were established from the buccal furcation to the mesial (F1M) and distal (F1D) surfaces of the mesio-buccal and disto-buccal roots; from the mesial furcation to the buccal (F2B) and palatal (F2P) surfaces of the mesio-buccal and palatal roots; and from the distal furcation to the buccal (F3B) and palatal (F3P) surfaces of the disto-buccal and palatal roots, respectively. One hundred maxillary first molar teeth were used, 50 from the right side and 50 from the left. Reference marks and demarcations were determined on the furcations and on the root surfaces involved in the measurements. We concluded that these measurements are important because they may effectively contribute to the diagnosis, prevention and treatment of periodontal problems.
Abstract:
Purpose - The purpose of this paper is to present a method to analyze noise in aircraft cabins through the VHF aeronautical communication channel, aimed at examining an environment in which communication problems may arise between the aircraft crew and the professionals responsible for ground control. Design/methodology/approach - The analysis uses equipment normally employed for the identification and comparison of electromagnetic noise in the cabin and in the airport environment, as well as equipment for analyzing the frequency and intensity of those signals. The analysis proceeds in reverse, eliminating situations that are not common in the examined environment until the irregular situation is identified. Findings - According to the results, the implementation of the Fourier transform for noise analysis in the cabin was efficient. The results demonstrate that, through this transform, noise sources can be identified even in environments with heavy spectrum pollution. Research limitations/implications - This kind of noise analysis is important, considering the need for good accuracy in airport environment analysis. Originality/value - The paper presents the main trends in the future of aviation communications and describes new applications that aim to minimize problems with the current VHF channel.
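The Fourier-transform step described in the findings can be sketched as follows: take the spectrum of a recorded signal and read off the dominant interference frequency. All signal parameters below (the sample rate and the 3 kHz tone) are invented for illustration:

```python
import numpy as np

# Identify the dominant interference tone in a noisy recording via
# the FFT, as in spectrum-based noise analysis.

fs = 48_000                       # sample rate in Hz (assumed)
t = np.arange(fs) / fs            # one second of samples
background = 0.1 * np.random.default_rng(0).standard_normal(fs)
tone = np.sin(2 * np.pi * 3000 * t)          # synthetic interference
signal = background + tone

spectrum = np.abs(np.fft.rfft(signal))       # magnitude spectrum
freqs = np.fft.rfftfreq(len(signal), d=1 / fs)
peak_hz = freqs[np.argmax(spectrum)]         # strongest component
print(f"dominant component: {peak_hz:.0f} Hz")   # -> 3000 Hz
```

With one second of data the bin spacing is 1 Hz, so the tone lands exactly on a bin and stands far above the noise floor.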
Abstract:
This thesis deals with the study of optimal control problems for the incompressible Magnetohydrodynamics (MHD) equations. Particular attention to these problems arises from several applications in science and engineering, such as fission nuclear reactors with liquid metal coolant and aluminum casting in metallurgy. In such applications it is of great interest to achieve control of the fluid state variables through the action of the magnetic Lorentz force. In this thesis we investigate a class of boundary optimal control problems, in which the flow is controlled through the boundary conditions of the magnetic field. Due to their complexity, these problems present various challenges in the definition of an adequate solution approach, from both a theoretical and a computational point of view. In this thesis we propose a new boundary control approach, based on lifting functions of the boundary conditions, which yields both theoretical and numerical advantages. With the introduction of lifting functions, boundary control problems can be formulated as extended distributed problems. We consider a systematic mathematical formulation of these problems in terms of the minimization of a cost functional constrained by the MHD equations. The existence of a solution to the flow equations and to the optimal control problem is shown. The Lagrange multiplier technique is used to derive an optimality system from which candidate solutions for the control problem can be obtained. In order to achieve the numerical solution of this system, a finite element approximation is considered for the discretization, together with an appropriate gradient-type algorithm. A finite element object-oriented library has been developed to obtain a parallel and multigrid computational implementation of the optimality system based on a multiphysics approach.
Numerical results of two- and three-dimensional computations show that a possible minimum for the control problem can be computed in a robust and accurate manner.
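The optimality-system loop described above (state solve, adjoint solve, gradient step) can be sketched on a tiny linear-quadratic analogue; the matrices below stand in for the discretized MHD system and are invented for illustration:

```python
import numpy as np

# Adjoint-based gradient loop for: state A y = f + B u,
# cost J(u) = 1/2 |y - y_d|^2 + alpha/2 |u|^2.

A = np.array([[2.0, -1.0], [-1.0, 2.0]])   # stand-in state operator
B = np.eye(2)                               # control-to-state map
f = np.array([1.0, 0.0])                    # forcing term
y_d = np.array([1.0, 1.0])                  # target state
alpha = 1e-3                                # control regularization
u = np.zeros(2)                             # control iterate

for _ in range(500):
    y = np.linalg.solve(A, f + B @ u)       # state solve
    p = np.linalg.solve(A.T, y - y_d)       # adjoint solve
    grad = alpha * u + B.T @ p              # gradient of J at u
    u -= 0.5 * grad                         # gradient-type step

y = np.linalg.solve(A, f + B @ u)           # final controlled state
print(np.round(y, 3))                       # close to the target y_d
```

For small `alpha` the controlled state lands within O(alpha) of the target, mirroring the trade-off in the regularized cost functional.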
Abstract:
In the past two decades, the work of a growing portion of researchers in robotics has focused on a particular group of machines belonging to the family of parallel manipulators: cable robots. Although these robots share several theoretical elements with the better-known parallel robots, they still present completely (or partly) unsolved issues. In particular, the study of their kinematics, already a difficult subject for conventional parallel manipulators, is further complicated by the non-linear nature of cables, which can exert only pulling forces. The work presented in this thesis therefore focuses on the study of the kinematics of these robots and on the development of numerical techniques able to address some of the related problems. Most of the work concerns the development of an interval-analysis based procedure for the solution of the direct geometric problem of a generic cable manipulator. This technique, besides allowing a rapid solution of the problem, also guarantees the results obtained against rounding and elimination errors, and can take into account any uncertainties in the model of the problem. The developed code has been tested with the help of a small manipulator whose realization is described in this dissertation, together with the auxiliary work done during its design and simulation phases.
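The guarantee against rounding and elimination errors comes from interval arithmetic: every operation returns an interval certain to enclose all pointwise results. A minimal sketch (outward rounding is omitted for brevity, and the position box is invented):

```python
# Minimal interval arithmetic of the kind used in interval-analysis
# solvers for the direct geometric problem.

class Interval:
    def __init__(self, lo, hi):
        self.lo, self.hi = lo, hi
    def __add__(self, other):
        return Interval(self.lo + other.lo, self.hi + other.hi)
    def __mul__(self, other):
        # The product interval is spanned by the four endpoint products.
        p = [self.lo * other.lo, self.lo * other.hi,
             self.hi * other.lo, self.hi * other.hi]
        return Interval(min(p), max(p))
    def __repr__(self):
        return f"[{self.lo}, {self.hi}]"

# Enclose the squared cable length x^2 + y^2 when the platform
# position is only known to lie in a box (values invented).
x, y = Interval(1.0, 2.0), Interval(0.5, 1.5)
print(x * x + y * y)   # -> [1.25, 6.25]
```

Every true position in the box yields a squared length inside the printed interval, which is what lets the solver discard boxes rigorously.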
Abstract:
Computing the weighted geometric mean of large sparse matrices is an operation that tends to become rapidly intractable as the size of the matrices involved grows. However, if we are not interested in the computation of the matrix function itself, but just in that of its product with a vector, the problem becomes simpler and there is a chance to solve it even when the matrix mean itself would be impossible to compute. Our interest is motivated by the fact that this calculation has practical applications related to the preconditioning of some operators arising in domain decomposition of elliptic problems. In this thesis, we explore how such a computation can be performed efficiently. First, we exploit the properties of the weighted geometric mean and find several equivalent ways to express it through real powers of a matrix. Hence, we focus our attention on matrix powers and examine how well-known techniques can be adapted to the solution of the problem at hand. In particular, we consider two broad families of approaches for the computation of f(A) v, namely quadrature formulae and Krylov subspace methods, and generalize them to the pencil case f(A\B) v. Finally, we provide an extensive experimental evaluation of the proposed algorithms and try to assess how convergence speed and execution time are influenced by characteristics of the input matrices. Our results suggest that a few elements have some bearing on the performance and that, although there is no best choice in general, knowing the conditioning and the sparsity of the arguments beforehand can considerably help in choosing the best strategy to tackle the problem.
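For small dense matrices, the reduction of the weighted geometric mean to a real matrix power, A #_t B = A (A^{-1} B)^t, can be checked directly. The dense eigendecomposition route below is only a small-scale reference; the large sparse case is precisely what calls for quadrature formulae or Krylov subspace methods:

```python
import numpy as np

# Weighted geometric mean of SPD matrices applied to a vector,
# via the identity A #_t B = A (A^{-1} B)^t.

def matrix_power_real(M, t):
    # A^{-1}B is diagonalizable with positive real eigenvalues
    # whenever A and B are SPD, so an eigendecomposition suffices.
    w, V = np.linalg.eig(M)
    return (V * np.real(w) ** t) @ np.linalg.inv(V)

def geo_mean_times_vector(A, B, v, t=0.5):
    return A @ (matrix_power_real(np.linalg.solve(A, B), t) @ v)

A = np.diag([4.0, 1.0])
B = np.diag([9.0, 4.0])
# Commuting case: A #_{1/2} B = diag(sqrt(4*9), sqrt(1*4)) = diag(6, 2).
print(geo_mean_times_vector(A, B, np.array([1.0, 1.0])))  # -> [6. 2.]
```

Note that only the product with `v` is ever needed, which is what makes the sparse problem tractable at all.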
Abstract:
In recent years the number of shoulder arthroplasties has been increasing. At the same time, the shape, size and strength of the implants, and the reasons that lead to possible early explantation, have not yet been examined in detail. Research carried out directly on explants is practically nonexistent, which means a poor understanding of the mechanisms that lead the patient, and thus the surgeon, to removal. Analysis of the mechanisms behind instability, dislocation, breakage, fracture, etc., may lead to changes in the structure or design of shoulder prostheses and lengthen the life of the implant in situ. The idea was to analyze 22 explants through three methods in order to assess roughness, corrosion and surface wear. In the first method, the humeral heads and/or glenospheres were examined with an interferometer, an instrument that uses electromagnetic waves to characterize the roughness of the surfaces under examination. The output of the device was a total profile containing both the roughness and the waviness (the most characteristic spatial wavelengths of the surface). The most important value, the "roughness average", gives the mean height of the peaks found in the local defects of the surface. It was found that 42% of the prostheses had considerable peak values in areas where the damage was caused by the implant itself and not only by external events, such as the surgeon's hand. One of the problems of interest in the use of metallic biomaterials is their resistance to corrosion. The clinical significance of the degradation of metal implants was the focus of the second method; the interaction between the human body and metal components is critical to understanding how and why they corrode. The percentage of damage in the joints of the prosthetic components was calculated from high-resolution photos using the software ImageJ.
Between 40% and 50% of the area showed scratches or multiple lines due to mechanical artifacts. The third method of analysis used electron microscopy to quantify surface wear in the polyethylene components. Different joint movements correspond to different damage mechanisms, which were imprinted on the polyethylene parts examined. The most affected area was located mainly at the side edges. The results could help manufacturers modify the design of the prostheses and thus reduce the number of explants. They could also help surgeons choose the prosthesis model to be implanted in the patient.
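The "roughness average" reported by the interferometer is, in essence, the mean absolute deviation of the measured profile from its mean line. A sketch with a synthetic profile (the height values are invented):

```python
import numpy as np

# Roughness average Ra: mean absolute deviation of the surface
# profile from its mean line, the figure an interferometer reports
# after separating roughness from longer-wavelength waviness.

def roughness_average(profile):
    z = np.asarray(profile, dtype=float)
    return np.mean(np.abs(z - z.mean()))

profile = [0.2, -0.1, 0.4, -0.3, 0.1, -0.3]   # heights in micrometres
print(f"Ra = {roughness_average(profile):.3f} um")   # -> Ra = 0.233 um
```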
Abstract:
BACKGROUND: Solitary skin nodules composed of pleomorphic T lymphocytes are often a source of diagnostic problems. OBJECTIVE: To characterize the clinicopathological features, prognosis and optimal treatment modalities of patients with solitary lymphoid nodules of small- to medium-sized pleomorphic T lymphocytes. METHODS: Twenty-six patients were analysed for clinical, histopathological, immunophenotypical, molecular and follow-up data. RESULTS: Lesions were located mainly on the head and neck (n = 16; 61.5%) or trunk (n = 8; 30.8%). Histopathology showed non-epidermotropic nodular or diffuse infiltrates of small- to medium-sized pleomorphic T lymphocytes. Monoclonality was found by PCR in 54.2% of cases (n = 13/24). After a mean follow-up of 79.7 months, a local recurrence was observed in only 1 patient. CONCLUSIONS: Our patients have a specific cutaneous lymphoproliferative disorder characterized by reproducible clinicopathological features. The incongruity between the indolent clinical course and the worrying histopathological features poses difficulties in classifying these cases unambiguously as benign or malignant. We suggest describing these lesions as 'solitary small- to medium-sized pleomorphic T-cell nodules of undetermined significance'. Irrespective of the name given to these equivocal cutaneous lymphoid proliferations, follow-up data support a non-aggressive therapeutic strategy.
Abstract:
Diabetes mellitus occurs in two forms, insulin-dependent (IDDM, formerly called juvenile type) and non-insulin-dependent (NIDDM, formerly called adult type). Prevalence figures from around the world for NIDDM show that all societies and all races are affected; although uncommon in some populations (0.4%), it is common (10%) or very common (40%) in others (Tables 1 and 2). In Mexican-Americans in particular, the prevalence rates (7-10%) are intermediate between those in Caucasians (1-2%) and Amerindians (35%). Information about the distribution of the disease, and identification of groups at high risk of developing glucose intolerance or its vascular manifestations through the study of genetic markers, will help to clarify and solve some of these problems from the public health and genetic points of view. This research was designed to examine two general areas in relation to NIDDM. The first aims to determine the prevalence of polymorphic genetic markers in two groups distinguished by the presence or absence of diabetes, and to test for genetic marker-disease associations (univariate analysis using two-by-two tables, and logistic regression to study the individual and joint effects of the different variables). The second deals with the effect of genetic differences on the variation in fasting plasma glucose and percent glycosylated hemoglobin (HbA1) (analysis of covariance for each marker, using age and sex as covariates). The results from the first analysis were not statistically significant at the corrected p value of 0.003, given the number of tests performed. From the analysis of covariance of all the markers studied, only Duffy and phosphoglucomutase were statistically significant, but they were poor predictors, given that the amount of variation in glycosylated hemoglobin they explain is very small. Determining the polygenic component of a chronic disease is not an easy task.
This study confirms that a larger, random or representative sample is needed to detect differences in the prevalence of a marker in association studies, and in the genetic contribution to the variation in glucose and glycosylated hemoglobin. The importance that ethnic homogeneity in the groups studied and standardization of methodology have for the results has been stressed.
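The first analysis described above reduces, per marker, to a two-by-two marker-by-disease table tested against a multiple-testing-corrected threshold. A hedged sketch: the counts are invented, and the assumption that roughly 17 markers produced the corrected p value of 0.003 (0.05/17) is ours, not the study's:

```python
# Pearson chi-square for one 2x2 marker-disease table, compared
# against a Bonferroni-corrected significance threshold.

def chi_square_2x2(a, b, c, d):
    """Chi-square statistic for the table [[a, b], [c, d]]."""
    n = a + b + c + d
    return n * (a * d - b * c) ** 2 / ((a + b) * (c + d) * (a + c) * (b + d))

# Rows: marker present / absent; columns: diabetic / non-diabetic.
stat = chi_square_2x2(20, 30, 25, 25)   # invented counts
alpha = 0.05 / 17                       # Bonferroni over ~17 markers -> ~0.003
crit = 8.8                              # chi-square (1 df) critical value near p = 0.003
print(round(stat, 3), stat > crit)      # -> 1.01 False
```

With such counts the statistic falls far below the corrected threshold, matching the abstract's non-significant outcome.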
Abstract:
In this work, the robustness and stability of continuum damage models applied to material failure in soft tissues are addressed. In implicit damage models equipped with softening, the presence of negative eigenvalues in the tangent elemental matrix degrades the condition number of the global matrix, reducing the computational performance of the numerical model. Two strategies have been adapted from the literature to mitigate this performance degradation: the IMPL-EX integration scheme [Oliver, 2006], which renders the elemental matrix contribution positive definite, and arclength-type continuation methods [Carrera, 1994], which allow capturing the unstable softening branch in brittle ruptures. The major drawback of the IMPL-EX integration scheme is the need for small time steps to keep the numerical error below an acceptable value. A convergence study, limiting the maximum allowed increment of the internal variables of the damage model, is presented. Finally, numerical simulation of failure problems with fibre-reinforced materials illustrates the performance of the adopted methodology.
Abstract:
Precise modeling of the program heap is fundamental for understanding the behavior of a program, and is thus of significant interest for many optimization applications. One of the fundamental properties of the heap that can be used in a range of optimization techniques is the sharing relationships between the elements in an array or collection. If an analysis can determine that the memory locations pointed to by different entries of an array (or collection) are disjoint, then in many cases loops that traverse the array can be vectorized or transformed into a thread-parallel version. This paper introduces several novel sharing properties over the concrete heap and corresponding abstractions to represent them. In conjunction with an existing shape analysis technique, these abstractions allow us to precisely resolve the sharing relations in a wide range of heap structures (arrays, collections, recursive data structures, composite heap structures) in a computationally efficient manner. The effectiveness of the approach is evaluated on a set of challenge problems from the JOlden and SPECjvm98 suites. Sharing information obtained from the analysis is used to achieve substantial thread-level parallel speedups.
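The core optimization enabled by the sharing analysis can be illustrated at runtime: if the sub-structures reachable from different array entries are pairwise disjoint, per-entry work can safely run thread-parallel. The dynamic check below is only a stand-in for the paper's static abstraction:

```python
from concurrent.futures import ThreadPoolExecutor

def entries_disjoint(array_of_lists):
    """Runtime stand-in for the sharing analysis: True iff no object
    is reachable from two different array entries."""
    seen = set()
    for sub in array_of_lists:
        for obj in sub:
            if id(obj) in seen:
                return False        # two entries share an element
            seen.add(id(obj))
    return True

# Four entries, each holding two private dicts -- no sharing.
data = [[{"v": i}, {"v": i + 1}] for i in range(0, 8, 2)]

def bump(sub):                      # per-entry work touches only `sub`
    for obj in sub:
        obj["v"] += 1

if entries_disjoint(data):          # proved disjoint -> parallelize
    with ThreadPoolExecutor() as pool:
        list(pool.map(bump, data))

print([obj["v"] for sub in data for obj in sub])  # -> [1, 2, 3, 4, 5, 6, 7, 8]
```

A static analysis delivers the same guarantee without executing the program, which is what makes the transformation profitable.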