928 results for Complexity.
Abstract:
Over the last decade, Grid computing paved the way for a new level of large-scale distributed systems. This infrastructure made it possible to securely and reliably take advantage of widely separated computational resources belonging to several different organizations. Resources can be incorporated into the Grid, building a theoretical virtual supercomputer. In time, cloud computing emerged as a new type of large-scale distributed system, inheriting and expanding the expertise and knowledge obtained so far. Some of the main characteristics of Grids naturally evolved into clouds, others were modified and adapted, and others were simply discarded or postponed. Regardless of these technical specifics, Grids and clouds together can be considered one of the most important advances in large-scale distributed computing of the past ten years; however, this step forward has come along with a completely new level of complexity. Grid and cloud management mechanisms play a key role, and a correct analysis and understanding of system behavior are needed. Large-scale distributed systems must be able to self-manage, incorporating autonomic features capable of controlling and optimizing all resources and services. Traditional distributed computing management mechanisms analyze each resource separately and adjust specific parameters of each one. When the same procedures are adapted to Grid and cloud computing, the vast complexity of these systems can make this task extremely complicated. But the complexity of large-scale distributed systems may be only a matter of perspective: it may be possible to understand the Grid or cloud as a single entity instead of a set of resources. This abstraction provides a different understanding of the system, describing large-scale behavior and global events that would probably not be detected by analyzing each resource separately. In this work we define a theoretical framework that combines both ideas, multiple resources and single entity, to develop large-scale distributed systems management techniques aimed at system performance optimization, increased dependability and Quality of Service (QoS). The resulting synergy could be the key to addressing the most important difficulties of Grid and cloud management.
Abstract:
In recent years, cities around the world have invested substantial amounts of money in measures to reduce congestion and car trips. These investments are potential remedies for the well-known urban sprawl phenomenon, also called the "development trap", which leads to further congestion and a higher proportion of our time spent in slow-moving cars. In this search for solutions, the complex relationship between the urban environment and travel behaviour has been studied in a number of cases. The main question under discussion is how to encourage multi-stop tours. The objective of this paper is therefore to verify whether unobserved factors influence tour complexity. For this purpose, we use a database from a survey conducted in 2006-2007 in Madrid, a suitable case study for analyzing urban sprawl due to new urban developments and substantial changes in mobility patterns in recent years. A total of 943 individuals were interviewed in 3 selected neighbourhoods (CBD, urban and suburban). We study the effect of unobserved factors on trip frequency. This paper presents the estimation of a hybrid model where the latent variable is called propensity to travel and the discrete choice model comprises 5 alternative tour types. The results show that the characteristics of the neighbourhoods in Madrid are important for explaining trip frequency. The influence of land-use variables on trip generation is clear, in particular the presence of retail establishments. Through the estimation of elasticities and forecasting, we determine to what extent land-use policy measures modify travel demand. Comparing aggregate elasticities with percentage variations shows that percentage variations could lead to inconsistent results. The results show that hybrid models explain travel behaviour better than traditional discrete choice models.
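As a rough illustration of how such elasticities are read off a discrete choice model, the sketch below computes the direct point elasticity of a multinomial logit alternative with respect to one attribute. The utilities, the coefficient beta_k, the attribute value x_k (standing in for something like a retail-density measure) and the five tour-type alternatives are all hypothetical, and the paper's hybrid model additionally includes the latent propensity-to-travel variable, which is not modelled here.

    import numpy as np

    def mnl_probabilities(V):
        """Multinomial logit choice probabilities from systematic utilities V."""
        e = np.exp(V - V.max())  # subtract max for numerical stability
        return e / e.sum()

    def direct_point_elasticity(beta_k, x_k, p_i):
        """Direct point elasticity of alternative i w.r.t. attribute x_k (MNL):
        E = beta_k * x_k * (1 - P_i)."""
        return beta_k * x_k * (1.0 - p_i)

    # Hypothetical utilities for 5 tour-type alternatives
    V = np.array([0.2, -0.1, 0.5, 0.0, -0.3])
    P = mnl_probabilities(V)
    # e.g. elasticity of the third alternative w.r.t. a land-use attribute
    print(direct_point_elasticity(beta_k=0.8, x_k=1.5, p_i=P[2]))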
Abstract:
Several authors have analysed the changes in the probability density function of solar radiation at different time resolutions. Others have studied the significance of these changes for produced-energy calculations. We have applied different transformations to four Spanish databases in order to clarify the interrelationship between radiation models and produced-energy estimations. Our contribution is straightforward: the complexity of a solar radiation model needed for yearly energy calculations is very low. Twelve monthly mean values of solar radiation are enough to estimate energy with errors below 3%. Time resolutions finer than hourly samples do not significantly improve the results of energy estimations.
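To make the claim concrete, here is a minimal sketch of a yearly energy estimate driven only by twelve monthly means, under an assumed linear performance model; the irradiation values, collector area and efficiency are placeholders, not figures taken from the four Spanish databases.

    # Monthly mean daily global irradiation H_m (kWh/m^2/day) -- hypothetical values
    H = [2.1, 2.9, 4.2, 5.3, 6.4, 7.1, 7.4, 6.6, 5.0, 3.4, 2.3, 1.8]
    DAYS = [31, 28, 31, 30, 31, 30, 31, 31, 30, 31, 30, 31]

    AREA_M2 = 10.0     # collector area (assumption)
    EFFICIENCY = 0.15  # overall system efficiency (assumption)

    # Annual energy estimate built from the 12 monthly means only
    annual_kwh = sum(h * d for h, d in zip(H, DAYS)) * AREA_M2 * EFFICIENCY
    print(f"Estimated annual yield: {annual_kwh:.0f} kWh")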
Abstract:
A unified low-complexity sign-bit-correlation-based symbol timing synchronization scheme for Multiband Orthogonal Frequency Division Multiplexing (MB-OFDM) Ultra Wideband (UWB) receiver systems is proposed. Using the time-domain sequence of the packet/frame synchronization preamble, the proposed scheme detects the upcoming MB-OFDM symbol and estimates the exact boundary of the start of the Fast Fourier Transform (FFT) window. The proposed algorithm is implemented using an efficient hardware-software co-simulation methodology. The effectiveness of the proposed synchronization scheme and of the optimization criteria is confirmed by hardware implementation results.
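A software sketch of the general sign-bit correlation idea (not the paper's hardware implementation): the received samples and the known preamble are reduced to their sign bits, and a sliding correlation is computed whose peak marks the estimated start of the FFT window. The preamble, noise levels and lengths below are hypothetical.

    import numpy as np

    def sign_bits(x):
        """Quantize samples to +/-1, keeping only the sign bit."""
        return np.where(x >= 0, 1, -1)

    def sign_correlate(rx, preamble):
        """Sliding correlation of sign bits; the peak marks the symbol boundary."""
        s_rx, s_pre = sign_bits(rx), sign_bits(preamble)
        n = len(s_pre)
        return np.array([np.dot(s_rx[i:i + n], s_pre)
                         for i in range(len(s_rx) - n + 1)])

    # Hypothetical preamble embedded in a noisy received stream at offset 200
    rng = np.random.default_rng(0)
    preamble = rng.standard_normal(128)
    rx = np.concatenate([rng.standard_normal(200),
                         preamble + 0.5 * rng.standard_normal(128),
                         rng.standard_normal(100)])
    corr = sign_correlate(rx, preamble)
    print("Estimated FFT window start:", int(np.argmax(corr)))  # expected near 200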
Abstract:
Alzheimer's disease (AD) is the most common cause of dementia. Over the last few years, a considerable effort has been devoted to exploring new biomarkers. Nevertheless, a better understanding of brain dynamics is still required to optimize therapeutic strategies. In this regard, the characterization of mild cognitive impairment (MCI) is crucial, due to the high conversion rate from MCI to AD. However, only a few studies have focused on the analysis of magnetoencephalographic (MEG) rhythms to characterize AD and MCI. In this study, we assess the ability of several parameters derived from information theory to describe spontaneous MEG activity from 36 AD patients, 18 MCI subjects and 26 controls. Three entropies (Shannon, Tsallis and Rényi entropies), one disequilibrium measure (based on the Euclidean distance, ED) and three statistical complexities (based on the López-Ruiz–Mancini–Calbet complexity, LMC) were used to estimate the irregularity and statistical complexity of MEG activity. Statistically significant differences between AD patients and controls were obtained with all parameters (p < 0.01). In addition, statistically significant differences between MCI subjects and controls were achieved by ED and LMC (p < 0.05). In order to assess the diagnostic ability of the parameters, linear discriminant analysis with a leave-one-out cross-validation procedure was applied. The accuracies reached 83.9% and 65.9% when discriminating AD and MCI subjects from controls, respectively. Our findings suggest that MCI subjects exhibit an intermediate pattern of abnormalities between normal aging and AD. Furthermore, the proposed parameters provide a new description of brain dynamics in AD and MCI.
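For reference, the information-theoretic quantities named above have standard definitions, sketched below for a discrete probability distribution; the example distribution is hypothetical, and the study's exact preprocessing of the MEG channels and choice of the entropic index q are not specified here.

    import numpy as np

    def shannon(p):
        p = p[p > 0]
        return -np.sum(p * np.log(p))

    def tsallis(p, q=2.0):
        return (1.0 - np.sum(p ** q)) / (q - 1.0)

    def renyi(p, q=2.0):
        return np.log(np.sum(p ** q)) / (1.0 - q)

    def lmc_complexity(p):
        """LMC statistical complexity: normalized Shannon entropy x disequilibrium."""
        n = len(p)
        h_norm = shannon(p) / np.log(n)     # entropy normalized to [0, 1]
        d = np.sum((p - 1.0 / n) ** 2)      # Euclidean distance to the uniform case
        return h_norm * d

    # Hypothetical normalized power spectrum of one MEG channel
    p = np.array([0.4, 0.25, 0.15, 0.1, 0.05, 0.05])
    print(shannon(p), tsallis(p), renyi(p), lmc_complexity(p))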
Abstract:
Today's motivation for autonomous systems research stems from the fact that networked environments have reached a level of complexity and heterogeneity that makes their control and management by human administrators alone more and more difficult. The optimisation of performance metrics for the air traffic management system, as in other networked systems, has become more complex with the increasing number of flights, capacity constraints, environmental factors and safety regulations. It is anticipated that a new structure of planning layers and the introduction of higher levels of automation will reduce complexity and optimise the performance metrics of the air traffic management system. This paper discusses the complexity of optimising air traffic management performance metrics and proposes a way forward based on higher levels of automation.
Abstract:
In informatics there is one kind of complexity that is perceived by everyone. It is the complexity of a concrete, isolated object, normally situated completely within one of the branches universally recognized by the scientific and technical community. Examples of this are the complexity of integrated electronic circuits, the complexity of algorithms and the complexity of software. The first complexity deals with the number of circuit components, the second with computation time and the third with the number of necessary mental discriminations. In order to illustrate my point, I will take up the last complexity, which, moreover, is the least well-known.
Abstract:
Proceedings.
Abstract:
Office automation is one of the fields where the complexity associated with technologies and working environments is best shown. This is the starting point we have chosen to build a theoretical model that shows us a scene quite different from the one traditionally considered. Through the development of the model, the levels of complexity associated with office automation and office environments have been identified, establishing a relationship between them. Thus, the model allows us to state a general principle for the sociotechnical design of office automation systems, comprising the ontological distinctions needed to properly evaluate each particular technology and its virtual contribution to office automation. From this comes the model's taxonomic ability to draw a global perspective of the state of the art in office automation technologies.
Abstract:
One medium-term strategy for helping in the management of complexity is the introduction of a conceptual complexity component at the very centre of university curricula. In very few areas is the growth of complexity as evident as in the information technologies (ITs), the focus of the work presented in the current paper. We have therefore developed an integrated way of tackling the specific field of information technologies by means of an approach to complexity. This paper describes the guidelines of our research effort, placing an emphasis on informatics. Concepts of complexity based on the system metaphor have been substantially drawn upon in this exercise and are thus presented in some detail. Also described is a didactic experiment conducted by the author and designed to provide a new and integrating approach to university curricula for future professionals. The students' "discovery" of complexity is the focal point of the experiment. The findings of this effort are encouraging and call for the continuation and expansion of this experiment.
Abstract:
The influence of CP content and ingredient complexity, feed form, and duration of feeding of the Phase I diets on growth performance and total tract apparent digestibility (TTAD) of energy and nutrients was studied in Iberian pigs weaned at 28 d of age. There were 12 dietary treatments with 2 types of feed (high quality, HQ, and low quality, LQ), 2 feed forms (pellets vs. mash), and 3 durations (7, 14, and 21 d) of supply of the Phase I diets.
Abstract:
The Semantics Difficulty Model (SDM) is a model that measures the difficulty of introducing semantic technology into a company. SDM manages three descriptions of stages, which we will refer to as "snapshots": a company semantic snapshot, a data snapshot and a semantic application snapshot. Understanding a priori the complexity of introducing semantics into a company is important because it allows the organization to take early decisions, thus saving time and money, mitigating risks and improving innovation, time to market and productivity. SDM works by measuring the Euclidean distance between each initial snapshot and its reference models (the company semantic snapshots reference model, the data snapshots reference model, and the semantic application snapshots reference model). The difficulty level is "not at all difficult" when the distance is small and becomes "extremely difficult" when the distance is large. SDM has been tested experimentally with 2,000 simulated companies with different arrangements and several initial stages. The output is expressed in five linguistic values: "not at all difficult", "slightly difficult", "averagely difficult", "very difficult" and "extremely difficult". As the preliminary results of our SDM simulation indicate, transforming a search application into one that integrates data from different sources with semantics is "slightly difficult", in contrast with data and opinion extraction applications, for which it is "very difficult".
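A minimal sketch of the distance-to-label mechanism described above, assuming a numeric encoding of snapshots and invented thresholds for the five linguistic values; the actual SDM reference models and feature scales are not given in the abstract.

    import math

    def euclidean(a, b):
        return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

    # Hypothetical distance thresholds for the five linguistic values
    LEVELS = [(0.5, "not at all difficult"), (1.0, "slightly difficult"),
              (2.0, "averagely difficult"), (3.0, "very difficult")]

    def difficulty(snapshot, reference):
        """Map the distance between a snapshot and its reference model to a label."""
        d = euclidean(snapshot, reference)
        for bound, label in LEVELS:
            if d <= bound:
                return label
        return "extremely difficult"

    # Hypothetical company-semantic snapshot vs. its reference model
    print(difficulty([0.2, 0.8, 0.1], [0.3, 0.9, 0.4]))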
Abstract:
Nonlinear analysis tools for studying and characterizing the dynamics of physiological signals have gained popularity, mainly because tracking sudden alterations of the inherent complexity of biological processes might be an indicator of altered physiological states. Typically, in order to perform an analysis with such tools, the physiological variables that describe the biological process under study are used to reconstruct its underlying dynamics. For that goal, a procedure called time-delay or uniform embedding is usually employed. Nonetheless, there is evidence of its inability to deal with non-stationary signals, such as those recorded from many physiological processes. To handle this drawback, this paper evaluates the utility of non-conventional time series reconstruction procedures based on non-uniform embedding, applying them to automatic pattern recognition tasks. The paper compares a state-of-the-art non-uniform approach with a novel scheme that fuses embedding and feature selection at once, searching for better reconstructions of the dynamics of the system. Results are also compared with two classic uniform embedding techniques. Thus, the goal is to compare uniform and non-uniform reconstruction techniques, including the one proposed in this work, for pattern recognition in biomedical signal processing tasks. Once the state space is reconstructed, the scheme characterizes it with three classic nonlinear dynamic features (Largest Lyapunov Exponent, Correlation Dimension and Recurrence Period Density Entropy), while classification is carried out by means of a simple k-NN classifier. In order to test its generalization capabilities, the approach was tested on three different physiological databases (speech pathologies, epilepsy and heart murmurs). In terms of the accuracy obtained in automatically detecting the presence of pathologies, and for the three types of biosignals analyzed, the non-uniform techniques used in this work slightly outperformed the uniform methods, suggesting their usefulness for characterizing non-stationary biomedical signals in pattern recognition applications. Moreover, in view of the results obtained and its low computational load, the proposed technique appears applicable to the applications under study.
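For context, the classic uniform baseline that the paper compares against is straightforward to sketch; the signal, the embedding dimension and the delay below are illustrative, and the paper's non-uniform embedding with fused feature selection is not reproduced here.

    import numpy as np

    def uniform_embedding(x, dim=3, tau=5):
        """Classic uniform time-delay embedding: each row is a state-space point
        [x(t), x(t + tau), ..., x(t + (dim - 1) * tau)]."""
        n = len(x) - (dim - 1) * tau
        return np.column_stack([x[i * tau : i * tau + n] for i in range(dim)])

    # Hypothetical biomedical signal segment: a noisy sinusoid
    t = np.linspace(0, 10, 1000)
    x = np.sin(2 * np.pi * 1.3 * t) + 0.1 * np.random.default_rng(1).standard_normal(1000)
    X = uniform_embedding(x, dim=3, tau=7)
    print(X.shape)  # (986, 3) reconstructed state vectors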
Abstract:
Assessment of the temporal transferability of species distribution models for present-day application, using palaeobotanical data for Corylus avellana and Alnus glutinosa.
Abstract:
This research brings together a cluster of interests around a specific way of generating architecture: the production of objects without an aprioristic underlying form. The knowledge presented is based on current conditions of thought that promote the ambition to feed the creative source of architecture with other fields of knowledge. Sensible animist knowledge and objective scientific knowledge are correlative in history but have rarely been synchronous. This research is also an attempt to combine both types of knowledge, regaining the momentum already sensed at the beginning of the twentieth century. It is therefore an essay on the annulment of the opposition between these two worlds, moving towards a complementarity of both in a single shared vision. The ultimate goal of this research is the development of a critical analysis system for architectural objects that allows a differentiation between those that respond to their problems completely and sincerely and those that hide, under an agreed-upon appearance, the lack of a method for resolving the complexity of the creative present. The research observes three distinct groups of knowledge, contained in their respective chapters. The first chapter deals with the Creative Impulse. It defines the need to create a framework for the creative individual who, regardless of the social forces of the moment, senses that there is something beyond that remains unresolved. We define the "rebel creator" as a figure recognizable throughout history, one able to recognize the changes operating in the present and to use them to discover the new and come closer to the creative origin. At present, this type of figure is the one who intuits, or has long intuited, the existence of a growing complexity in contemporary thought that cannot be ignored. The second chapter develops some properties of systems for creative action. It presents a framework of scientific knowledge very specific to our time that architecture has not yet absorbed or reflected directly in its way of creating. These are issues with an almost mundane presence in society, yet they resist being included in creative processes as part of the collective consciousness. Most of them speak of precision, invisible orders, and properties of matter or energy, treated objectively and apolitically. The ultimate goal is the approach and incorporation of these concepts and properties into our sensible world, unifying them inseparably under a single point of view. The last chapter deals with complexity and its capacity for reduction to the essential. Here, by way of conclusions, several concepts are introduced for the development of a critical system for the architecture of our time. Among them is Essential Complexity, defined as the complexity that inevitably arises when architecture responds to the growing problems and demands it faces today. The thesis maintains the importance of reporting that, in the present state of things, it is impossible to respond sincerely with simplistic solutions, and hence the need for necessarily complex solutions. In this sense, the concept of Underlying Form is also defined as a critical tool for evaluating the response of each architecture and for having a critical system and vision of what constitutes a consistent object in the face of the situation it confronts. The underlying form is defined as a way of jointly and synchronously understanding what we perceive sensibly as inseparable from the hidden creative, technological, material and energetic forces that support the definition and understanding of any constructed object.