967 results for Complex problems
Abstract:
BACKGROUND Limitations in the primary studies constitute one important factor to be considered in the Grading of Recommendations Assessment, Development and Evaluation (GRADE) system for rating the quality of evidence. However, in network meta-analysis (NMA), such evaluation poses a special challenge because each network estimate receives different amounts of contribution from various studies via direct as well as indirect routes, and because some biases have directions whose repercussions in the network can be complicated. FINDINGS In this report we use an NMA of maintenance pharmacotherapy for bipolar disorder (17 interventions, 33 studies) to demonstrate how to quantitatively evaluate the impact of study limitations using netweight, a Stata command for NMA. For each network estimate, the percentages of contributions from direct comparisons at high, moderate or low risk of bias were quantified. This method has proven flexible enough to accommodate complex biases with direction, such as the one due to the enrichment design seen in some trials of bipolar maintenance pharmacotherapy. CONCLUSIONS Using netweight, therefore, we can evaluate in a transparent and quantitative manner how the limitations of individual studies in an NMA affect the quality of evidence of each network estimate, even when such limitations have clear directions.
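The per-estimate summary the abstract describes can be sketched in a few lines: given the percentage contribution of each direct comparison to a network estimate, and a risk-of-bias rating per comparison, sum the percentages per rating. This is a hypothetical illustration only; the comparison names, percentages and the `rob_breakdown` helper are made up and do not come from the study or from netweight itself.

```python
# Hypothetical sketch: sum each network estimate's percentage
# contributions from direct comparisons, grouped by risk-of-bias
# rating. Names and numbers are illustrative, not from the study.

def rob_breakdown(contributions, rob):
    """contributions: {comparison: percent}; rob: {comparison: rating}."""
    totals = {"high": 0.0, "moderate": 0.0, "low": 0.0}
    for comparison, percent in contributions.items():
        totals[rob[comparison]] += percent
    return totals

# Contributions to one hypothetical network estimate (A vs B).
contrib_AB = {"A-B": 55.0, "A-C": 30.0, "B-C": 15.0}
ratings = {"A-B": "low", "A-C": "high", "B-C": "moderate"}
print(rob_breakdown(contrib_AB, ratings))
# {'high': 30.0, 'moderate': 15.0, 'low': 55.0}
```

The output makes the quality judgement transparent: here 30% of the estimate's information flows through a high-risk-of-bias comparison.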
Abstract:
This paper addresses the current importance of the professional education of counselors. It reviews the history of the pre-scientific and scientific stages of counseling and points out its main representatives in Latin America. It also describes the current state of the specialization, taking as an antecedent some in-depth, wide-ranging research carried out in the European Union in this area. Coincidences and differences are presented in terms of professional profiles, university degrees, institutional dependence, updating processes and modes of intervention. The current state of postgraduate studies at university is also discussed, and the different proposals for further education in Latin America are analyzed, as well as their dependence upon an undergraduate degree in Psychology, Educational Sciences or Psycho-pedagogy. Finally, the paper situates counseling within this complexity and the need to provide all-encompassing, integrative answers, which require an education wide and deep enough to address all these issues and problems.
Abstract:
From the water management perspective, water scarcity is an unacceptable risk of facing water shortages to serve water demands in the near future. Water scarcity may be temporary, related to drought conditions or some other accidental situation, or permanent, due to deeper causes such as excessive demand growth, lack of infrastructure for water storage or transport, or constraints in water management. Diagnosing the causes of water scarcity in complex water resources systems is a precondition for adopting effective drought risk management actions. In this paper we present four indices developed to evaluate water scarcity. We propose a methodology for interpreting index values that leads to conclusions about the reliability and vulnerability of systems to water scarcity, as well as to diagnosing their possible causes and proposing solutions. The methodology was applied to the Ebro river basin, identifying existing and expected problems and possible solutions. System diagnostics based exclusively on the analysis of index values were compared with the known reality as perceived by system managers, validating the conclusions in all cases.
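The paper's four indices are not reproduced in the abstract, but the reliability and vulnerability notions it mentions are commonly formalized over a supply/demand time series. The sketch below is a minimal, hypothetical version of that kind of metric (fraction of periods in which demand is met, and mean relative deficit over failing periods); the function names and the numbers are illustrative, not the paper's definitions.

```python
# Hypothetical sketch of two common scarcity metrics of the kind the
# paper's indices build on; not the paper's actual four indices.

def reliability(supply, demand):
    """Fraction of periods in which demand is fully served."""
    ok = sum(1 for s, d in zip(supply, demand) if s >= d)
    return ok / len(demand)

def vulnerability(supply, demand):
    """Mean relative deficit over the periods that fail."""
    deficits = [(d - s) / d for s, d in zip(supply, demand) if s < d]
    return sum(deficits) / len(deficits) if deficits else 0.0

supply = [90, 100, 70, 100, 60]   # delivered volumes per period
demand = [80, 100, 100, 90, 100]  # demanded volumes per period
print(reliability(supply, demand))    # 0.6 (3 of 5 periods served)
print(vulnerability(supply, demand))  # mean of 0.3 and 0.4 deficits
```

Read together, a low reliability with a high vulnerability points to structural scarcity, while isolated deep deficits suggest drought-driven, temporary scarcity, which mirrors the diagnostic use of index values described above.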
Abstract:
The analysis of deformation in soils is of paramount importance in geotechnical engineering. For a long time the complex behaviour of natural deposits defied the ingenuity of engineers. The time has come when, with the aid of computers, numerical methods allow the solution of every problem provided the material law can be specified with a certain accuracy. Boundary element (B.E.) techniques have recently exploded in a splendid flowering of methods and applications that compare advantageously with other well-established procedures such as the finite element method (F.E.M.). Their application to soil mechanics problems (Brebbia 1981) has started and will grow in the future. This paper presents a simple formulation for a classical problem. In fact, there is already a large number of applications of B.E. to diffusion problems (Rizzo et al., Shaw, Chang et al., Combescure et al., Wrobel et al., Roures et al., Onishi et al.), and very recently the first specific application to consolidation problems was published by Onishi et al. Here we develop an alternative formulation to that presented in the last reference. Fundamentally, the idea is to introduce a finite difference discretization in the time domain in order to use the fundamental solution of a Helmholtz-type equation governing the neutral pressure distribution. Although this procedure seems to have gone unappreciated in the previous technical literature, it is nevertheless effective and straightforward to implement. Indeed, for the special problem under study it is perfectly suited, because a step-by-step interaction between the elastic and flow problems is needed. It also allows the introduction of non-linear elastic properties and time-dependent conditions very easily, as will be shown, and compares well with the performance of other approaches.
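The time-discretization step mentioned above can be sketched as follows. This is a minimal sketch, assuming the neutral (pore) pressure $p$ obeys a simple diffusion equation with constant coefficient $c$, an assumption the abstract does not spell out:

```latex
\frac{\partial p}{\partial t} = c\,\nabla^2 p,
\qquad
\frac{p^{\,n+1}-p^{\,n}}{\Delta t} = c\,\nabla^2 p^{\,n+1}
\;\Longrightarrow\;
\nabla^2 p^{\,n+1} - \frac{1}{c\,\Delta t}\,p^{\,n+1}
  = -\frac{1}{c\,\Delta t}\,p^{\,n}.
```

The backward difference in time turns each step into a modified Helmholtz equation for $p^{\,n+1}$ with the previous pressure field as a known source term, which is what makes the known fundamental solution of the Helmholtz-type operator usable as the kernel of the boundary integral formulation.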
Abstract:
The time evolution of an ensemble of dynamical systems coupled through an irregular interaction scheme gives rise to dynamics of great complexity and to emergent phenomena that cannot be predicted from the properties of the individual systems. The main objective of this thesis is precisely to increase our understanding of the interplay between the interaction topology and the collective dynamics that a complex network can support. This is a very broad subject, so in this thesis we limit ourselves to the study of three relevant problems that have strong connections among them. First, it is a well-known fact that in many natural and man-made systems that can be represented as complex networks the topology is not static; rather, it depends on the dynamics taking place on the network (as happens, for instance, in the neuronal networks of the brain). In these adaptive networks the topology itself emerges from the self-organization of the system. To better understand how the properties commonly observed in real networks spontaneously emerge, we have studied the behavior of systems that evolve according to empirically motivated local adaptive rules. Our numerical and analytical results show that self-organization brings about two of the most universally found properties of complex networks: at the mesoscopic scale, the appearance of a community structure, and, at the macroscopic scale, the existence of a power law in the weight distribution of the network interactions. The fact that these properties show up in two models with quantitatively different mechanisms that follow the same general adaptive principles suggests that our results may generalize to other systems as well, and may be behind the origin of these properties in some real systems. 
We also propose a new measure that provides a ranking of the elements in a network in terms of their relevance for the maintenance of collective dynamics. Specifically, we study the vulnerability of the elements under perturbations or large fluctuations, interpreted as a measure of the impact these external events have on the disruption of collective motion. Our results suggest that the dynamic vulnerability measure depends largely on local properties (our conclusions thus being valid for different topologies), and they show a non-trivial dependence of the vulnerability on the connectivity of the network elements. Finally, we propose a strategy for imposing generic goal dynamics on a given network, and we explore its performance in networks with different topologies that support turbulent dynamical regimes. It turns out that heterogeneous networks (and most real networks that have been studied belong to this category) are the most suitable for our strategy for the targeting of desired dynamics, the strategy being very effective even when knowledge of the network topology is far from accurate. Aside from their theoretical relevance for the understanding of collective phenomena in complex systems, the methods and results discussed here might lead to applications in experimental and technological systems, such as in vitro neuronal systems, the central nervous system (where pathological synchronous activity sometimes occurs), communication systems and power grids.
Abstract:
Since the epoch-making "memoir" of Saint-Venant in 1855, the torsion of prismatic and cylindrical bars has been reduced to a mathematical problem: the calculation of an analytic function satisfying prescribed boundary values. For over a century, until the first applications of the F.E.M. to the problem, the only possibilities for study in irregularly shaped domains were the beautiful, but limited, theory of complex function analysis, several functional approaches, and the finite difference method. Nevertheless, in 1963 Jaswon published an interesting paper which was nearly lost amid the splendid F.E.M. boom. The method was extended by Rizzo to more complicated problems and definitively incorporated into the scientific community's background through several lecture notes of Cruse, recently published but widely circulated during past years. The work of several researchers has shown the tremendous possibilities of the method, which is today a recognized alternative to the well-established F.E. procedure. In fact, the first comprehensive attempt to cover the method has recently been published in textbook form. This paper is a contribution to the treatment of a difficulty which arises when the isoparametric element concept is applied to plane potential problems with sharp corners on the boundary of the domain. In previous works, these problems were avoided using two principal approximations: equating the fluxes around the corner, or establishing a binode element (in effect, truncating the corner). The first approximation heavily distorts the solution in the corner neighbourhood, and a great number of elements is necessary to reduce its influence. The second is better suited, but the price paid is an increase in the size of the system of equations to be solved. In this paper an alternative formulation, consistent with the shape function chosen in the isoparametric representation, is presented. For ease of comprehension the formulation has been limited to the linear element. Nevertheless, its extension to more refined elements is straightforward. A direct procedure for the assembly of the equations is also presented, in an attempt to reduce the in-core computer requirements.
Abstract:
At present, engineering problems require quite sophisticated calculation means. However, analytical models can still prove to be a useful tool for engineers and scientists when dealing with complex physical phenomena. The mathematical models developed to analyze three different engineering problems are described: photovoltaic device analysis, cup anemometer performance, and the effects of high-speed train pressure waves in tunnels. In all cases, the results are quite accurate when compared with test measurements.
Abstract:
The relationship between structural controllability and observability of complex systems is studied. Algebraic and graph-theoretic tools are combined to prove the extent of some controller/observer duality results. Two types of control design problems are addressed and some fundamental theoretical results are provided. In addition, new algorithms are presented to compute optimal solutions for monitoring large-scale real networks.
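The controller/observer duality the abstract builds on is the classical one: a pair (A, B) is controllable exactly when the transposed pair (Aᵀ, Bᵀ) is observable. A minimal sketch of that check, via the standard Kalman rank test (not the paper's structural algorithms, which work on the graph rather than on numeric matrices):

```python
import numpy as np

def controllable(A, B):
    """Kalman rank test: (A, B) is controllable iff the matrix
    [B, AB, ..., A^(n-1) B] has full row rank n."""
    n = A.shape[0]
    blocks = [B]
    for _ in range(n - 1):
        blocks.append(A @ blocks[-1])
    return np.linalg.matrix_rank(np.hstack(blocks)) == n

def observable(A, C):
    """Duality: (A, C) is observable iff (A^T, C^T) is controllable."""
    return controllable(A.T, C.T)

# Double integrator actuated at one end, measured at the other.
A = np.array([[0.0, 1.0],
              [0.0, 0.0]])
B = np.array([[0.0], [1.0]])
C = np.array([[1.0, 0.0]])
print(controllable(A, B))  # True
print(observable(A, C))    # True
```

Structural (generic) versions of these properties replace the numeric rank by a rank over the sparsity pattern alone, which is what allows graph-theoretic tools to be brought in for large networks.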
Abstract:
In this work, we formulate a theory to address simulations of slow transport effects in atomic systems. We first develop this theoretical framework, based on statistical mechanics, in the context of equilibrium atomic ensembles. We then adapt it to model ensembles away from equilibrium. The theory stands on Jaynes' maximum entropy principle, valid for the treatment of both systems in equilibrium and away from equilibrium, and on mean-field approximation theory. It is expressed mathematically as a variational principle in which a free entropy, rather than a free energy, is maximized. We introduce atomistic equivalents of macroscopic variables such as the temperature and the molar fractions, which are not required to be uniform but can vary from particle to particle, so that non-uniform macroscopic fields can be considered. We complement this theory with Monte Carlo quadrature rules that yield computable models. In addition, we provide a framework for studying transport processes with the full set of equations driving the evolution of the system. We first derive a dissipation inequality for the entropic production involving discrete thermodynamic forces and fluxes. This discrete dissipation inequality identifies the adequate structure for the discrete kinetic potentials which couple the microscopic field rates to the corresponding driving forces. Those kinetic potentials must finally be complemented with a phenomenological relation of the Onsager type. We present several validation cases, illustrating equilibrium properties and surface segregation in metallic alloys. We first assess the ability of a simple mean-field model to reproduce thermodynamic equilibrium properties in systems with atomic resolution. We then evaluate the ability of the model to reproduce transport processes in complex systems over times that are long with respect to the characteristic atomic-scale times.
Abstract:
Passenger comfort in terms of acoustic noise levels is a key train design parameter, especially relevant in high-speed trains, where aerodynamic noise is dominant. The aim of the work described in this paper is to make progress in the understanding of the flow field around high-speed trains in an open field, a subject of interest for many researchers with direct industrial applications; the critical configuration of the train inside a tunnel is also studied in order to evaluate the external loads arising from the noise sources of the train. The airborne noise coming from the wheels (wheel-rail interaction), which is the dominant source in a certain range of frequencies, is also investigated from the numerical and experimental points of view. The numerical prediction of the noise in the interior of the train is a very complex problem involving many different parameters: complex geometries and materials, different noise sources, complex interactions among those sources, the broad range of frequencies where the phenomenon is important, etc. In recent years a research plan has been under development at IDR/UPM (Instituto de Microgravedad Ignacio Da Riva, Universidad Politécnica de Madrid) involving numerical simulations, wind tunnel tests and full-scale tests to address this problem. Comparison of numerical simulations with experimental data is a key factor in this process.
Abstract:
The present research work concerns the study of vertical vortex-induced vibrations (VIV) in bridges which, owing to their geometric characteristics and dynamic properties, show a certain sensitivity to this type of aeroelastic phenomenon. It focuses on the analysis of the wind-structure interaction mechanism on bluff sections of simple geometry, with the objective of properly characterising the problem before subsequently addressing the analysis of sections of more complex geometry, representative of the main bridge structural elements, such as arches, decks, towers and piers. This issue is of particular importance during the bridge design phase, since minor details of the aforementioned elements can significantly influence the bridge's sensitivity to aerodynamic problems. The shape and main dimensions of the deck cross section, the addition of safety barriers and windshields, the presence of braces joining different structural elements, the use of cross sections in a tandem arrangement, and the erection of a new bridge in the vicinity of an existing one are some of the aspects to be considered regarding sensitivity to aeroelastic effects. The study has been carried out mainly through numerical simulations that reproduce the interaction between the airflow and representative cross sections of structural bridge models, using a CFD code based on the vortex particle method (VPM) and thus following a Lagrangian scheme. The results have been validated against existing experimental data, values from wind tunnel tests and full-scale observations from different case studies: Alconétar (2006), Niterói (1980), Trans-Tokyo Bay (1995) and Volgograd (2010). Finally, a semi-empirical model is proposed for the estimation of the critical wind velocity ranges and oscillation amplitudes, based on the use of Scanlan's flutter derivatives and the power spectral density of the aerodynamic force time history in the frequency domain.
Abstract:
The central problem of complex inheritance is to map oligogenes for disease susceptibility, integrating linkage and association over samples that differ in several ways. Combination of evidence over multiple samples with 1,037 families supports loci contributing to asthma susceptibility in the cytokine region on 5q [maximum logarithm of odds (lod) = 2.61 near IL-4], but no evidence for atopy. The principal problems with retrospective collaboration on linkage appear to have been solved, providing far more information than a single study. A multipoint lod table evaluated at commonly agreed reference loci is required for both collaboration and meta-analysis, but variations in ascertainment, pedigree structure, phenotype definition, and marker selection are tolerated. These methods are invariant with statistical methods that increase the power of lods and are applicable to all diseases, motivating collaboration rather than competition. In contrast to linkage, positional cloning by allelic association has yet to be extended to multiple samples, a prerequisite for efficient combination with linkage and the greatest current challenge to genetic epidemiology.
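The combination step described above works because a lod score is a log10 likelihood ratio, so evidence from independent samples adds: combining multipoint lod tables evaluated at common reference loci is elementwise addition. A minimal sketch, with illustrative locus names and values that are not taken from the study:

```python
# Lod scores from independent samples add (each lod is a log10
# likelihood ratio), so combining multipoint lod tables evaluated at
# common reference loci is per-locus addition. Loci and values below
# are illustrative only.

def combine_lod_tables(tables):
    """Sum lod scores per reference locus across independent samples."""
    combined = {}
    for table in tables:
        for locus, lod in table.items():
            combined[locus] = combined.get(locus, 0.0) + lod
    return combined

sample_a = {"IL4": 1.5, "D5S436": 0.75}
sample_b = {"IL4": 1.25, "D5S436": 0.5}
print(combine_lod_tables([sample_a, sample_b]))
# {'IL4': 2.75, 'D5S436': 1.25}
```

This additivity is also why variations in ascertainment, pedigree structure and marker selection across samples can be tolerated: each sample contributes its own lod table at the agreed reference loci, however it was computed.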
Abstract:
Gene targeting allows precise, predetermined changes to be made in a chosen gene in the mouse genome. To date, targeting has been used most often for the generation of animals completely lacking the product of a gene of interest. The resulting "knockout" mice have confirmed some hypotheses, have upset others, but have rarely been uninformative. Models of several human genetic diseases have been produced by targeting, including Gaucher disease, cystic fibrosis, and the fragile X syndrome. These diseases are primarily determined by defects in single genes, and their modes of inheritance are well understood. When the disease under study has a complex etiology with multiple genetic and environmental components, the generation of animal models becomes more difficult but no less valuable. The problems associated with dissecting out the individual genetic factors also increase substantially, and the distinction between causation and correlation is often difficult. To prove causation in a complex system requires rigorous adherence to the principle that the experiments must allow detection of the effects of changing only a single variable at a time. Gene targeting experiments, when properly designed, can test the effects of a precise genetic change completely free from the effects of differences in any other genes (linked or unlinked to the test gene). They therefore allow proofs of causation.