912 results for Numerical Algorithms and Problems
Abstract:
In recent decades, biocomposites have been widely used in the construction, automobile, and aerospace industries. Both the interface transition zone (ITZ) and the heterogeneity of the natural fibres affect the mechanical behaviour of these composites. This work focuses on the numerical and experimental analyses of a polymeric composite fabricated with epoxy resin and unidirectional sisal and banana fibres. A three-dimensional model was set up to analyze the composites using the elastic properties of the individual phases. In addition, a two-dimensional model was set up using the effective composite properties obtained from micromechanical models. Tensile testing was performed to validate the numerical analyses and to evaluate the interface condition of the constituent phases.
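As a concrete illustration of the micromechanical models mentioned above, the classical Voigt and Reuss rules of mixtures bound the effective modulus of a unidirectional composite. This is a minimal sketch; the property values are illustrative assumptions, not the paper's measured data:

```python
def voigt_modulus(E_f, E_m, v_f):
    """Voigt (iso-strain) upper bound, the rule of mixtures:
    E = v_f * E_f + (1 - v_f) * E_m."""
    return v_f * E_f + (1.0 - v_f) * E_m

def reuss_modulus(E_f, E_m, v_f):
    """Reuss (iso-stress) lower bound, the inverse rule of mixtures."""
    return 1.0 / (v_f / E_f + (1.0 - v_f) / E_m)

# Illustrative values (GPa, GPa, fibre volume fraction) - assumed, not measured
E_fibre, E_matrix, vf = 20.0, 3.5, 0.4
print(round(voigt_modulus(E_fibre, E_matrix, vf), 2))  # → 10.1
print(round(reuss_modulus(E_fibre, E_matrix, vf), 2))  # → 5.22
```

The true effective longitudinal and transverse moduli of the composite lie between these two bounds; tighter micromechanical estimates (e.g. Halpin-Tsai) interpolate between them.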
Abstract:
The Firefly Algorithm is a recent swarm intelligence method, inspired by the social behavior of fireflies and based on their flashing and attraction characteristics [1, 2]. In this paper, we analyze the implementation of a dynamic penalty approach combined with the Firefly Algorithm for solving constrained global optimization problems. To assess the applicability and performance of the proposed method, some benchmark problems from engineering design optimization are considered.
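A hedged sketch of the approach described above: a basic Firefly Algorithm with a dynamic penalty term whose weight grows with the iteration counter. The test problem (minimise x0² + x1² subject to x0 + x1 ≥ 1) and all parameter values are illustrative, not the paper's benchmarks:

```python
import math
import random

def objective(x):
    return x[0]**2 + x[1]**2

def constraint_violation(x):
    # g(x) = 1 - x0 - x1 <= 0; a positive value measures the violation
    return max(0.0, 1.0 - x[0] - x[1])

def penalised(x, k):
    # Dynamic penalty: the weight increases with iteration k
    return objective(x) + (k + 1) * 10.0 * constraint_violation(x)**2

def firefly(n=25, iters=200, alpha=0.2, beta0=1.0, gamma=1.0, seed=0):
    rng = random.Random(seed)
    pop = [[rng.uniform(-2, 2), rng.uniform(-2, 2)] for _ in range(n)]
    for k in range(iters):
        fit = [penalised(x, k) for x in pop]
        for i in range(n):
            for j in range(n):
                if fit[j] < fit[i]:  # firefly i moves toward brighter firefly j
                    r2 = sum((a - b)**2 for a, b in zip(pop[i], pop[j]))
                    beta = beta0 * math.exp(-gamma * r2)  # attraction decays with distance
                    for d in range(2):
                        pop[i][d] += (beta * (pop[j][d] - pop[i][d])
                                      + alpha * (rng.random() - 0.5))
        alpha *= 0.98  # shrink the random walk over time
    return min(pop, key=lambda x: penalised(x, iters))

best = firefly()
print(best)
```

With these settings the best firefly typically ends near the constrained optimum (0.5, 0.5); the dynamic penalty makes early exploration cheap while driving late iterations toward feasibility.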
Abstract:
"Series: Solid mechanics and its applications, vol. 226"
Abstract:
We review several results concerning the long-time asymptotics of nonlinear diffusion models, based on entropy and mass transport methods. Semidiscretizations of these nonlinear diffusion models are proposed and their numerical properties analysed. We demonstrate the long-time asymptotic results by numerical simulation and discuss several open problems suggested by these numerical results. We show that for general nonlinear diffusion equations the long-time asymptotics can be characterized in terms of fixed points of certain maps which are contractions for the Euclidean Wasserstein distance. In fact, we propose a new scaling for which we can prove that this family of fixed points converges to the Barenblatt solution for perturbations of homogeneous nonlinearities with values close to zero.
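For reference, the prototypical model behind these asymptotics is the porous medium equation, whose self-similar Barenblatt solution is the long-time attractor mentioned above (standard textbook formulas, not taken from this abstract):

```latex
% Porous medium equation in R^n, m > 1:
\partial_t u = \Delta u^m
% Self-similar Barenblatt solution, toward which solutions converge:
B(x,t) = t^{-\alpha}\Bigl(C - \kappa\,|x|^2\,t^{-2\beta}\Bigr)_{+}^{1/(m-1)},
\qquad
\alpha = \frac{n}{n(m-1)+2},\quad
\beta = \frac{\alpha}{n},\quad
\kappa = \frac{\alpha(m-1)}{2mn}
```

Here the constant C is fixed by mass conservation, and (·)₊ denotes the positive part, so the solution has compact support for each t.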
Abstract:
Prevention programs in adolescence are particularly effective if they target homogeneous risk groups of adolescents who share a combination of particular needs and problems. The present work aims to identify and classify risky single-occasion drinking (RSOD) adolescents according to their motivation to engage in drinking. An easy-to-use coding procedure was developed. It was validated by means of cluster analyses and structural equation modeling based on two randomly selected subsamples of a nationally representative sample of 2,449 12- to 18-year-old RSOD students in Switzerland. Results revealed that the coding procedure classified RSOD adolescents as either enhancement drinkers or coping drinkers. The high concordance (Sample A: kappa = .88, Sample B: kappa = .90) with the results of the cluster analyses demonstrated the convergent validity of the coding classification. The fact that enhancement drinkers in both subsamples were found to go out more frequently in the evenings and to have more satisfactory social relationships, as well as a higher proportion of drinking peers and a lower likelihood of drinking at home than coping drinkers, demonstrates the concurrent validity of the classification. To conclude, the coding procedure appears to be a valid, reliable, and easy-to-use tool that can help better adapt prevention activities to adolescent risky drinking motives.
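The concordance values reported above are Cohen's kappa, chance-corrected agreement between the coding procedure and the cluster solution. A minimal sketch of the statistic on illustrative labels ('E' = enhancement drinker, 'C' = coping drinker; toy data, not the study's):

```python
from collections import Counter

def cohens_kappa(a, b):
    """Cohen's kappa for two equal-length label sequences:
    (observed agreement - chance agreement) / (1 - chance agreement)."""
    assert len(a) == len(b)
    n = len(a)
    observed = sum(x == y for x, y in zip(a, b)) / n
    ca, cb = Counter(a), Counter(b)
    expected = sum(ca[k] * cb[k] for k in ca) / (n * n)
    return (observed - expected) / (1.0 - expected)

# Illustrative classifications of 10 adolescents by the two methods
coder   = ['E', 'E', 'C', 'C', 'E', 'C', 'E', 'E', 'C', 'E']
cluster = ['E', 'E', 'C', 'C', 'E', 'C', 'E', 'C', 'C', 'E']
print(round(cohens_kappa(coder, cluster), 3))  # → 0.8
```

Values above roughly .80, like those reported, are conventionally read as almost perfect agreement.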
Abstract:
ABSTRACT: BACKGROUND: Serologic testing algorithms for recent HIV seroconversion (STARHS) provide important information for HIV surveillance. We have shown that a patient's antibody reaction in a confirmatory line immunoassay (INNO-LIA™ HIV I/II Score, Innogenetics) provides information on the duration of infection. Here, we sought to further investigate the diagnostic specificity of various Inno-Lia algorithms and to identify factors affecting it. METHODS: Plasma samples of 714 selected patients of the Swiss HIV Cohort Study infected for longer than 12 months and representing all viral clades and stages of chronic HIV-1 infection were tested blindly by Inno-Lia and classified as either incident (up to 12 months) or older infection by 24 different algorithms. Of the total, 524 patients received HAART, 308 had HIV-1 RNA below 50 copies/mL, and 620 were infected by an HIV-1 non-B clade. Using logistic regression analysis we evaluated factors that might affect the specificity of these algorithms. RESULTS: HIV-1 RNA <50 copies/mL was associated with significantly lower reactivity to all five HIV-1 antigens of the Inno-Lia and impaired specificity of most algorithms. Among 412 patients either untreated or with HIV-1 RNA ≥50 copies/mL despite HAART, the median specificity of the algorithms was 96.5% (range 92.0-100%). The only factor that significantly promoted false-incident results in this group was age, with false-incident results increasing by a few percent per additional year. HIV-1 clade, HIV-1 RNA, CD4 percentage, sex, disease stage, and testing modalities exhibited no significance. Results were similar among 190 untreated patients. CONCLUSIONS: The specificity of most Inno-Lia algorithms was high and not affected by HIV-1 variability, advanced disease, and other factors promoting false-recent results in other STARHS. Specificity should be good in any group of untreated HIV-1 patients.
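The specificity figures above are the share of known-chronic (>12 months) infections that an algorithm correctly classifies as "older" rather than falsely "incident". A minimal sketch with illustrative counts, not the study's actual per-algorithm data:

```python
def specificity(true_negative, false_positive):
    """Specificity = TN / (TN + FP): the fraction of truly chronic
    infections that are not falsely flagged as incident."""
    return true_negative / (true_negative + false_positive)

# Hypothetical algorithm: of 412 chronic patients, 398 classified
# 'older', 14 falsely classified 'incident' (assumed numbers)
dP = specificity(398, 14)
print(round(100 * dP, 1))  # → 96.6
```

In this setting every patient is a true non-seroconverter, so sensitivity is undefined and specificity alone characterizes each algorithm's error rate.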
Abstract:
The coverage and volume of geo-referenced datasets are extensive and incessantly growing. The systematic capture of geo-referenced information generates large volumes of spatio-temporal data to be analyzed. Clustering and visualization play a key role in exploratory data analysis and in the extraction of knowledge embedded in these data. However, the special characteristics of these data pose new challenges for visualization and clustering: complex structures, large numbers of samples, variables involved in a temporal context, high dimensionality, and large variability in cluster shapes. The central aim of my thesis is to propose new algorithms and methodologies for clustering and visualization, in order to assist knowledge extraction from spatio-temporal geo-referenced data and thus improve decision-making processes. I present two original algorithms, one for clustering, the Fuzzy Growing Hierarchical Self-Organizing Networks (FGHSON), and one for exploratory visual data analysis, the Tree-structured Self-Organizing Maps Component Planes. In addition, I present methodologies that, combined with FGHSON and the Tree-structured SOM Component Planes, integrate space and time seamlessly and simultaneously in order to extract knowledge embedded in a temporal context. The originality of the FGHSON lies in its capability to reflect the underlying structure of a dataset in a hierarchical fuzzy way. A hierarchical fuzzy representation of clusters is crucial when data include complex structures with large variability of cluster shapes, variances, densities, and numbers of clusters. The most important characteristics of the FGHSON are: (1) it does not require an a-priori setup of the number of clusters; (2) the algorithm executes several self-organizing processes in parallel, so that, when dealing with large datasets, the processes can be distributed to reduce the computational cost; (3) only three parameters are necessary to set up the algorithm.
In the case of the Tree-structured SOM Component Planes, the novelty of this algorithm lies in its ability to create a structure that allows the visual exploratory analysis of large high-dimensional datasets. This algorithm creates a hierarchical structure of Self-Organizing Map Component Planes, arranging similar variables' projections in the same branches of the tree. Hence, similarities in the variables' behavior can be easily detected (e.g., local correlations, maximal and minimal values, and outliers). Both FGHSON and the Tree-structured SOM Component Planes were applied to several agroecological problems, proving to be very efficient in the exploratory analysis and clustering of spatio-temporal datasets.
In this thesis I also tested three soft competitive learning algorithms: two well-known unsupervised soft competitive algorithms, namely the Self-Organizing Maps (SOMs) and the Growing Hierarchical Self-Organizing Maps (GHSOMs), and our original contribution, the FGHSON. Although the algorithms presented here have been used in several areas, to my knowledge no previous work has applied and compared the performance of these techniques on spatio-temporal geospatial data, as is done in this thesis. I propose original methodologies to explore spatio-temporal geo-referenced datasets through time. Our approach uses time windows to capture temporal similarities and variations with the FGHSON clustering algorithm. The developed methodologies are used in two case studies: in the first, the objective was to find similar agroecozones through time; in the second, to find similar environmental patterns shifted in time. Several results presented in this thesis have led to new contributions to agroecological knowledge, for instance in sugar cane and blackberry production. Finally, in the framework of this thesis we developed several software tools: (1) a Matlab toolbox that implements the FGHSON algorithm, and (2) a program called BIS (Bio-inspired Identification of Similar agroecozones), an interactive graphical user interface that integrates the FGHSON algorithm with Google Earth in order to show zones with similar agroecological characteristics.
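All three algorithms compared in the thesis build on the self-organizing map. A minimal 1-D SOM training sketch on toy 2-D data, with assumed parameters; this is the plain SOM only, not the thesis's FGHSON, which adds fuzzy hierarchical growth:

```python
import math
import random

def train_som(data, n_units=4, iters=500, lr0=0.5, sigma0=1.5, seed=0):
    """Minimal 1-D self-organizing map: units compete for each sample;
    the winner and its grid neighbours move toward the sample, with
    learning rate and neighbourhood width decaying over time."""
    rng = random.Random(seed)
    dim = len(data[0])
    w = [[rng.random() for _ in range(dim)] for _ in range(n_units)]
    for t in range(iters):
        x = rng.choice(data)
        frac = t / iters
        lr = lr0 * (1 - frac)
        sigma = sigma0 * (1 - frac) + 0.1
        # best-matching unit = unit with the closest weight vector
        bmu = min(range(n_units),
                  key=lambda i: sum((w[i][d] - x[d])**2 for d in range(dim)))
        for i in range(n_units):
            h = math.exp(-((i - bmu)**2) / (2 * sigma**2))  # grid neighbourhood
            for d in range(dim):
                w[i][d] += lr * h * (x[d] - w[i][d])
    return w

# Two well-separated toy clusters; units should settle near the data
data = [[0.1, 0.1], [0.15, 0.05], [0.9, 0.9], [0.85, 0.95]]
weights = train_som(data)
```

After training, each sample should lie close to some unit, and neighbouring units on the 1-D grid map to nearby regions of the input space, which is the topology preservation the component-plane visualizations exploit.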
Abstract:
Experimental observations of self-organized behavior arising out of noise are also described, and details on the numerical algorithms needed in the computer simulation of these problems are given.
Abstract:
RATIONALE: Many sources of conflict exist in intensive care units (ICUs). Few studies recorded the prevalence, characteristics, and risk factors for conflicts in ICUs. OBJECTIVES: To record the prevalence, characteristics, and risk factors for conflicts in ICUs. METHODS: One-day cross-sectional survey of ICU clinicians. Data on perceived conflicts in the week before the survey day were obtained from 7,498 ICU staff members (323 ICUs in 24 countries). MEASUREMENTS AND MAIN RESULTS: Conflicts were perceived by 5,268 (71.6%) respondents. Nurse-physician conflicts were the most common (32.6%), followed by conflicts among nurses (27.3%) and staff-relative conflicts (26.6%). The most common conflict-causing behaviors were personal animosity, mistrust, and communication gaps. During end-of-life care, the main sources of perceived conflict were lack of psychological support, absence of staff meetings, and problems with the decision-making process. Conflicts perceived as severe were reported by 3,974 (53%) respondents. Job strain was significantly associated with perceiving conflicts and with greater severity of perceived conflicts. Multivariate analysis identified 15 factors associated with perceived conflicts, of which 6 were potential targets for future intervention: staff working more than 40 h/wk, more than 15 ICU beds, caring for dying patients or providing pre- and postmortem care within the last week, symptom control not ensured jointly by physicians and nurses, and no routine unit-level meetings. CONCLUSIONS: Over 70% of ICU workers reported perceived conflicts, which were often considered severe and were significantly associated with job strain. Workload, inadequate communication, and end-of-life care emerged as important potential targets for improvement.
Abstract:
Abstract: This work is concerned with the development and application of novel unsupervised learning methods, having in mind two target applications: the analysis of forensic case data and the classification of remote sensing images. First, a method based on a symbolic optimization of the inter-sample distance measure is proposed to improve the flexibility of spectral clustering algorithms, and applied to the problem of forensic case data. This distance is optimized using a loss function related to the preservation of neighborhood structure between the input space and the space of principal components, and solutions are found using genetic programming. Results are compared to a variety of state-of-the-art clustering algorithms. Subsequently, a new large-scale clustering method based on a joint optimization of feature extraction and classification is proposed and applied to various databases, including two hyperspectral remote sensing images. The algorithm makes use of a functional model (e.g., a neural network) for clustering which is trained by stochastic gradient descent. Results indicate that such a technique can easily scale to huge databases, can avoid the so-called out-of-sample problem, and can compete with or even outperform existing clustering algorithms on both artificial data and real remote sensing images. This is verified on small databases as well as very large problems. Résumé: This research work concerns the development and application of so-called unsupervised learning methods. The applications targeted by these methods are the analysis of forensic case data and the classification of hyperspectral remote-sensing images. First, an unsupervised classification methodology based on the symbolic optimization of an inter-sample distance measure is proposed. This measure is obtained by optimizing a cost function related to the preservation of a point's neighborhood structure between the space of the initial variables and the space of principal components. The method is applied to the analysis of forensic case data and compared with a range of existing methods. Second, a method based on the joint optimization of the variable-selection and classification tasks is implemented in a neural network and applied to various databases, including two hyperspectral images. The neural network is trained with a stochastic gradient algorithm, which makes the technique applicable to very high-resolution images. The results of this last application show that such a technique can classify very large databases without difficulty and gives results that compare favorably with existing methods.
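The second method above trains a functional model for clustering by stochastic gradient descent. The simplest instance of that idea is online (SGD) k-means, sketched below on toy data; this is an illustrative reduction, not the thesis's neural-network model:

```python
import random

def online_kmeans(data, k=2, iters=2000, lr0=0.1, seed=0):
    """Online (stochastic-gradient) k-means: each sampled point pulls
    its nearest centroid toward it with a decaying step size. Because
    each update touches one sample, the method scales to huge datasets
    and assigns unseen points naturally (no out-of-sample problem)."""
    rng = random.Random(seed)
    centroids = [list(rng.choice(data)) for _ in range(k)]
    for t in range(iters):
        x = rng.choice(data)
        lr = lr0 / (1 + 0.01 * t)  # decaying step size
        j = min(range(k),
                key=lambda i: sum((centroids[i][d] - x[d])**2
                                  for d in range(len(x))))
        for d in range(len(x)):
            centroids[j][d] += lr * (x[d] - centroids[j][d])
    return centroids

# Two toy clusters; one centroid should settle near each
data = [[0.0, 0.0], [0.1, 0.0], [1.0, 1.0], [0.9, 1.1]]
cents = online_kmeans(data)
print(cents)
```

Replacing the nearest-centroid assignment with a differentiable network output, trained on the same per-sample gradient principle, gives the joint feature-extraction-and-clustering scheme the abstract describes.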
Abstract:
The objective of this Master's thesis was to examine, by means of computational fluid dynamics, phenomena related to flow and to gas dispersion. The thesis is divided into five parts: introduction, theory, a review of studies on flow modelling in porous media, numerical modelling, and the presentation of results together with conclusions. At the beginning of the thesis, attention was paid to the various experimental, numerical, and theoretical methods that can be used to model flow in a porous medium. The literature part reviews previously published semi-empirical and empirical studies of the pressure drop caused by a porous medium. In the computational fluid dynamics part, numerical models describing the porous medium were built and presented using the commercial FLUENT software. At the end of the work, the results from theory, computational fluid dynamics, and the experimental studies were evaluated. The results obtained from the three-dimensional numerical modelling of the porous medium appeared promising, and on the basis of these results recommendations were made for future flow modelling in porous media. Some of the results presented in this thesis will be presented at the 55th Canadian Chemical Engineering Conference in Toronto, October 16-19, 2005, and in an international ASME engineering publication; the work has been accepted for presentation in the computational fluid dynamics (CFD) topic area 'Fundamentals'. In addition, the detailed results of the work will also be submitted to a CES publication.
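The semi-empirical pressure-drop correlations reviewed in work of this kind are typically of Ergun type, combining a viscous and an inertial resistance term. A minimal sketch with illustrative parameter values (air through a packed bed), not the thesis's cases:

```python
def ergun_pressure_drop(u, L, dp, eps, mu, rho):
    """Ergun equation: pressure drop (Pa) across a packed bed of
    length L (m), particle diameter dp (m), and porosity eps, at
    superficial velocity u (m/s); mu (Pa*s) and rho (kg/m^3) are
    the fluid viscosity and density."""
    viscous = 150.0 * mu * (1 - eps)**2 * u / (eps**3 * dp**2)
    inertial = 1.75 * rho * (1 - eps) * u**2 / (eps**3 * dp)
    return (viscous + inertial) * L

# Illustrative case: air at ambient conditions through a 0.1 m bed
dP = ergun_pressure_drop(u=0.5, L=0.1, dp=2e-3, eps=0.4,
                         mu=1.8e-5, rho=1.2)
print(round(dP, 1))  # → 435.9
```

The two terms map directly onto the viscous and inertial resistance coefficients that CFD codes such as FLUENT take as inputs for their porous media models.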
Abstract:
In this thesis, the magnetic field control of convection instabilities and of heat and mass transfer processes in magnetic fluids has been investigated by numerical simulations and theoretical considerations. Simulation models based on finite element and finite volume methods have been developed. In addition to the standard conservation equations, the magnetic field inside the simulation domain is calculated from the Maxwell equations, and the terms needed to account for the magnetic body force and magnetic dissipation have been added to the equations governing the fluid motion. Numerical simulations of magnetic fluid convection near the threshold supported experimental observations qualitatively. Near the onset of convection, the competitive action of thermal and concentration density gradients leads to mostly spatiotemporally chaotic convection with oscillatory and travelling wave regimes, previously observed in binary mixtures and nematic liquid crystals. In many applications of magnetic fluids, the heat and mass transfer processes, including the effects of external magnetic fields, are of great importance. In addition to magnetic fluids, the concepts and the simulation models used in this study may also be applied to studies of convective instabilities in ordinary fluids as well as in other binary mixtures and complex fluids.
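The conservation equations in such simulations are discretised with finite-element or finite-volume methods. As a minimal, hedged illustration of the finite-volume idea, here is an explicit 1-D heat diffusion update with none of the magnetic-fluid terms discussed above; the grid and parameters are illustrative:

```python
def diffuse_1d(T, alpha, dx, dt, steps):
    """Explicit finite-volume update for 1-D heat diffusion with
    fixed-temperature boundaries. Each interior cell exchanges flux
    with its two neighbours. Stable only for alpha*dt/dx**2 <= 0.5."""
    r = alpha * dt / dx**2
    assert r <= 0.5, "explicit scheme unstable for this time step"
    T = list(T)
    for _ in range(steps):
        T = ([T[0]] +
             [T[i] + r * (T[i + 1] - 2 * T[i] + T[i - 1])
              for i in range(1, len(T) - 1)] +
             [T[-1]])
    return T

# Hot left wall (1.0), cold right wall (0.0): the temperature profile
# relaxes toward the linear steady-state conduction solution
profile = diffuse_1d([1.0] + [0.0] * 9, alpha=1.0, dx=1.0, dt=0.4, steps=500)
```

Real convection solvers couple such transport updates to momentum and, here, Maxwell equations, but the cell-balance structure of each update is the same.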
Abstract:
Sudoku problems are among the best-known and most enjoyed pastimes, with a never-diminishing popularity; over the last few years, however, they have gone from entertainment to an interesting research area, and a doubly interesting one at that. On the one hand, Sudoku problems, being a variant of Gerechte Designs and Latin Squares, are actively used for experimental design, as in [8, 44, 39, 9]. On the other hand, Sudoku problems, as simple as they seem, are really hard structured combinatorial search problems, and thanks to their characteristics and behavior they can be used as benchmark problems for refining and testing solving algorithms and approaches. Moreover, thanks to their high inner structure, their study can contribute more than the study of random problems to our goal of solving real-world problems and applications and of understanding the problem characteristics that make them hard to solve. In this work we use two techniques for modeling and solving Sudoku problems, namely the Constraint Satisfaction Problem (CSP) and Satisfiability Problem (SAT) approaches. To this effect we define the Generalized Sudoku Problem (GSP), where regions can be of rectangular shape, problems can be of any order, and solution existence is not guaranteed. With respect to the worst-case complexity, we prove that GSP with block regions of m rows and n columns with m = n is NP-complete. To study the empirical hardness of GSP, we define a series of instance generators that differ in the balancing level they guarantee between the constraints of the problem, by finely controlling how the holes are distributed over the cells of the GSP. Experimentally, we show that the more balanced the constraints are, the higher the complexity of solving the GSP instances, and that GSP is harder than the Quasigroup Completion Problem (QCP), a problem generalized by GSP.
Finally, we provide a study of the correlation between backbone variables – variables with the same value in all the solutions of an instance – and the hardness of GSP.
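A minimal sketch of the CSP view of the GSP: backtracking over holes with row, column, and m x n block-region constraints. The 4x4 instance with 2x2 blocks is illustrative; this is plain depth-first search, not the thesis's solvers:

```python
def solve_gsp(grid, m, n):
    """Fill the 0-holes of a Generalized Sudoku grid with symbols
    1..m*n so that each row, column, and m x n block is a permutation.
    Mutates grid in place; returns True iff a solution exists."""
    size = m * n

    def candidates(r, c):
        # values already used in the row, column, and enclosing block
        used = set(grid[r]) | {grid[i][c] for i in range(size)}
        br, bc = (r // m) * m, (c // n) * n
        used |= {grid[i][j] for i in range(br, br + m)
                            for j in range(bc, bc + n)}
        return [v for v in range(1, size + 1) if v not in used]

    for r in range(size):
        for c in range(size):
            if grid[r][c] == 0:              # 0 marks a hole
                for v in candidates(r, c):
                    grid[r][c] = v
                    if solve_gsp(grid, m, n):
                        return True
                grid[r][c] = 0               # undo and backtrack
                return False
    return True  # no holes left: all constraints satisfied

puzzle = [[1, 2, 0, 0],
          [0, 0, 1, 0],
          [0, 1, 0, 0],
          [0, 0, 0, 1]]
ok = solve_gsp(puzzle, 2, 2)
print(ok)  # → True
```

The hole positions are exactly what the instance generators above control: the more evenly the holes spread over rows, columns, and blocks, the fewer forced values the search finds early, which is one intuition for why balanced instances are harder.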