960 results for Complex system


Relevance: 60.00%

Abstract:

An optimal control law for a general nonlinear system can be obtained by solving the Hamilton-Jacobi-Bellman (HJB) equation. However, it is difficult to obtain an analytical solution of this equation even for a moderately complex system. In this paper, we propose a continuous-time single network adaptive critic scheme for nonlinear control-affine systems, where the optimal cost-to-go function is approximated using a parametric positive semi-definite function. Unlike earlier approaches, a continuous-time weight update law is derived from the HJB equation. The stability of the system during the evolution of the weights is analysed using Lyapunov theory. The effectiveness of the scheme is demonstrated through simulation examples.
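
As a minimal illustration of the idea (not the paper's exact scheme), the sketch below adapts the weight of a parametric cost-to-go V(x) = w*x^2 for the scalar linear system x' = a*x + u by gradient descent on the squared HJB residual; for a = -1 the weight should approach the known Riccati/LQR value sqrt(2) - 1. All names, step sizes, and iteration counts are illustrative.

```python
import math

# Sketch only: scalar system x' = a*x + u, cost = integral of (x^2 + u^2).
# The cost-to-go is approximated by the positive semi-definite form
# V(x) = w*x^2, the control is u* = -0.5*dV/dx = -w*x, and w is adapted by
# gradient descent on the squared HJB residual.

def train_critic(a=-1.0, lr=1e-2, steps=20000):
    w = 0.0
    for _ in range(steps):
        # HJB residual per unit x^2:
        #   x^2 + u^2 + V_x*(a*x + u) = x^2 * (1 + 2*a*w - w^2)
        r = 1.0 + 2.0 * a * w - w * w
        w -= lr * r * (2.0 * a - 2.0 * w)   # gradient of 0.5*r^2 w.r.t. w
    return w

w = train_critic()
print(w, math.sqrt(2.0) - 1.0)  # learned weight vs. exact Riccati solution
```

In the paper's setting the weight update is a continuous-time law driven by the state trajectory; the discrete gradient iteration here only conveys the structure of the residual being minimized.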

Relevance: 60.00%

Abstract:

Today's feature-rich multimedia products require embedded system solutions with complex Systems-on-Chip (SoC) to meet market expectations of high performance at low cost and low energy consumption. The memory architecture of the embedded system strongly influences these parameters, so the embedded system designer performs a complete memory architecture exploration. This is a multi-objective optimization problem and can be tackled as a two-level optimization problem: the outer level explores various memory architectures, while the inner level explores the placement of data sections (the data layout problem) to minimize memory stalls. Further, the designer is interested in multiple optimal design points to address various market segments, but tight time-to-market constraints enforce a short design cycle time. In this paper we address the multi-level, multi-objective memory architecture exploration problem through a combination of a multi-objective genetic algorithm (for memory architecture exploration) and an efficient heuristic data placement algorithm. At the outer level, the memory architecture exploration is done by picking memory modules directly from an ASIC memory library. This allows the exploration to be performed in an integrated framework, where memory allocation, memory exploration, and data layout work in a tightly coupled way to yield optimal design points with respect to area, power, and performance. We evaluated our approach on three embedded applications; it explores several thousand memory architectures for each application, yielding a few hundred optimal design points in a few hours of computation time on a standard desktop.
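
The two-level structure can be sketched as follows, with a hypothetical memory library and made-up access counts standing in for the ASIC library and application profiles. The paper uses a genetic algorithm at the outer level; exhaustive enumeration is used here only because the toy library is tiny.

```python
from itertools import combinations

# Hypothetical inputs, for illustration only.
LIBRARY = [  # (name, size_KB, cycles_per_access, area)
    ("SRAM_fast", 8, 1, 4.0),
    ("SRAM_slow", 32, 4, 6.0),
    ("DRAM", 256, 10, 2.0),
]
SECTIONS = [("sec_a", 6, 900), ("sec_b", 20, 400), ("sec_c", 100, 50)]  # (name, KB, accesses)

def data_layout(arch):
    """Greedy inner level: hottest sections go into the fastest remaining memory."""
    free = {m[0]: m[1] for m in arch}
    speed = sorted(arch, key=lambda m: m[2])  # fastest first
    stalls = 0
    for name, size, accesses in sorted(SECTIONS, key=lambda s: -s[2]):
        for mem in speed:
            if free[mem[0]] >= size:
                free[mem[0]] -= size
                stalls += accesses * mem[2]
                break
        else:
            return None  # section does not fit anywhere
    return stalls

def explore():
    """Outer level: enumerate architectures, keep Pareto-optimal (area, stalls)."""
    points = []
    for r in range(1, len(LIBRARY) + 1):
        for arch in combinations(LIBRARY, r):
            stalls = data_layout(arch)
            if stalls is not None:
                points.append((sum(m[3] for m in arch), stalls, [m[0] for m in arch]))
    return [p for p in points
            if not any(q[0] <= p[0] and q[1] <= p[1] and q[:2] != p[:2] for q in points)]

for area, stalls, mems in sorted(explore()):
    print(area, stalls, mems)
```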

Relevance: 60.00%

Abstract:

Many areas of scientific research demand ever greater precision and sophistication, and with them very large amounts of compute power at acceptable cost, making advanced research in high-performance computing inevitable. The basic principle of sharing and collaborative work by geographically separated computers has been known by several names, such as metacomputing, scalable computing, cluster computing, and internet computing, and has today metamorphosed into a new term known as grid computing. This paper gives an overview of grid computing and compares various grid architectures. We show the role that patterns can play in architecting complex systems, and provide a very pragmatic reference to a set of well-engineered patterns that the practicing developer can apply to crafting his or her own specific applications. We are not aware of a pattern-oriented approach having been applied to develop and deploy a grid. There are many grid frameworks that are built or are in the process of becoming functional. All these grids differ in some functionality or another, though the basic principle on which they are built is the same. Despite this, there are no standard requirements listed for building a grid. The grid being a very complex system, it is mandatory to have a standard Software Architecture Specification (SAS), which we attempt to develop for use by any grid user or developer. Specifically, we analyze the grid using an object-oriented approach and present the architecture using UML. The paper proposes the usage of patterns at all levels (analysis, design, and architectural) of grid development.

Relevance: 60.00%

Abstract:

This is a transient two-dimensional numerical study of double-diffusive salt fingers in a two-layer heat-salt system for a wide range of initial density stability ratios (R_ρ0) and thermal Rayleigh numbers (Ra_T ~ 10^3 to 10^11). Salt fingers have been studied for several decades now, but several perplexing features of this rich and complex system remain unexplained. The present work studies this problem and shows the morphological variation in fingers from low to high thermal Rayleigh numbers, which previous investigators have missed. Considerable variations in convective structures and evolution patterns were observed over the range of Ra_T used in the simulations. The evolution of salt fingers was studied by monitoring the finger structures, kinetic energy, vertical profiles, velocity fields, and the transient variation of R_ρ(t). The results show that large-scale convection that limits the finger length was observed only at high Rayleigh numbers. The transition from nonlinear to linear convection occurs at about Ra_T ~ 10^8. Contrary to the popular notion, R_ρ(t) first decreases during diffusion before the onset time and then increases when convection begins at the interface. The decrease in R_ρ(t) is substantial at low Ra_T, where it can fall even below unity, resulting in overturning of the system. Interestingly, all finger systems pass through the same state before the onset of convection, irrespective of the Rayleigh number and density stability ratio of the system. (C) 2014 AIP Publishing LLC.
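
For readers unfamiliar with the two governing parameters, their standard definitions can be evaluated directly. The values below are made-up seawater-like numbers, not the simulation's parameter set.

```python
# Standard definitions: thermal Rayleigh number Ra_T = g*alpha*dT*H^3/(nu*kappa_T)
# and density stability ratio R_rho0 = (alpha*dT)/(beta*dS).

g = 9.81          # gravity, m/s^2
alpha = 2e-4      # thermal expansion coefficient, 1/K
beta = 8e-4       # haline contraction coefficient, 1/(g/kg)
nu = 1e-6         # kinematic viscosity, m^2/s
kappa_T = 1.4e-7  # thermal diffusivity, m^2/s

dT, dS, H = 2.0, 0.5, 0.1   # layer contrasts and depth (K, g/kg, m)

Ra_T = g * alpha * dT * H**3 / (nu * kappa_T)
R_rho0 = (alpha * dT) / (beta * dS)
print(f"Ra_T = {Ra_T:.3g}, R_rho0 = {R_rho0:.3g}")
```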

Relevance: 60.00%

Abstract:

Michael Behe and William Dembski are two of the leaders of Intelligent Design theory, a proposal that arose in response to the evolutionist and anti-finalist models prevalent in certain academic and intellectual circles, especially in the English-speaking world. Behe's speculations rest on the concept of the "irreducibly complex system," understood as an ordered set of parts whose functionality depends strictly on its structural integrity, and whose origin is therefore refractory to gradualist explanations. According to Behe, such systems are present in living beings, which would allow us to infer that they are not the product of blind, random mechanisms but the result of design. Dembski, for his part, has approached the problem from a more quantitative perspective, developing a probabilistic algorithm known as the "explanatory filter," which, according to the author, would make it possible to infer scientifically the presence of design in both artificial and natural entities. Moving beyond the dismissals of neo-Darwinism, we examine these authors' proposal from the philosophical foundations of the Thomist school. In our view, their work contains some valuable intuitions, which nevertheless tend to go unnoticed because of the scant formality with which they are presented, and because of the eminently mechanistic and artifactual approach both authors take to the question. It is precisely at making those intuitions explicit that this article aims.

Relevance: 60.00%

Abstract:

The vibration analysis of an elastic container partially filled with fluid was investigated in this paper. The container is made of a thin cylinder and two circular plates at the ends, with the axis of the cylinder in the horizontal direction. The problem is difficult to solve because the complex system is not axially symmetric. The equations of motion for this system were derived, using an incompressible, ideal fluid model. Solutions of the equations were obtained by the generalized variational method and expressed as a series of normalized generalized Fourier functions. This series converges rapidly, so an approximate solution was obtained with high precision. The agreement of the calculated values with the experimental results is good. It should be mentioned that with our method the computing time is less than with the finite-element method.

Relevance: 60.00%

Abstract:

In this work, the state of the art of automatic dialogue strategy management using Markov decision processes (MDP) with reinforcement learning (RL) is described; partially observable Markov decision processes (POMDP) are also covered. To test the validity of these methods, two spoken dialogue systems have been developed: the first provides weather forecasts, and the second is a more complex system for train information. With the first system, a rule-based system was compared against an automatically trained one, using a real corpus to train the automatic strategy. With the second system, the scalability of these methods in larger systems was tested.
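
A toy version of the approach, with a two-slot weather domain and tabular Q-learning standing in for the paper's systems and training corpora, looks like this:

```python
import random

# Illustrative dialogue MDP: the state records which information slots are
# filled; the agent must learn to ask for missing slots before answering.
ACTIONS = ["ask_city", "ask_day", "give_forecast"]

def step(state, action):
    city, day = state
    if action == "ask_city":
        return (True, day), -1.0, False      # each turn has a small cost
    if action == "ask_day":
        return (city, True), -1.0, False
    if city and day:
        return state, 10.0, True             # successful answer ends the dialogue
    return state, -5.0, False                # premature answer is penalized

def q_learn(episodes=3000, alpha=0.2, gamma=0.95, eps=0.2, seed=0):
    rng = random.Random(seed)
    Q = {(c, d): {a: 0.0 for a in ACTIONS} for c in (False, True) for d in (False, True)}
    for _ in range(episodes):
        s, done, turns = (False, False), False, 0
        while not done and turns < 20:
            a = rng.choice(ACTIONS) if rng.random() < eps else max(Q[s], key=Q[s].get)
            s2, r, done = step(s, a)
            Q[s][a] += alpha * (r + gamma * (0 if done else max(Q[s2].values())) - Q[s][a])
            s, turns = s2, turns + 1
    return Q

Q = q_learn()
policy = {s: max(Q[s], key=Q[s].get) for s in Q}
print(policy)  # learned strategy: ask for missing slots, then answer
```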

Relevance: 60.00%

Abstract:

The hydrological response of a catchment to rainfall on different timescales is the result of a complex system involving a range of physical processes which may operate simultaneously and have different spatial and temporal influences. This paper presents an analysis of the streamflow response of a small humid-temperate catchment (Aixola, 4.8 km^2) in the Basque Country on different timescales and discusses the role of the controlling factors. First, daily time series analysis was used to establish a hypothesis on the general functioning of the catchment through the relationship between precipitation and discharge on annual and multiannual scales (2003-2008). Second, rainfall-runoff relationships and relationships among several hydrological variables, including catchment antecedent conditions, were explored at the event scale (222 events) to check and refine the hypothesis. Finally, the evolution of electrical conductivity (EC) during some of the monitored storm events (28 events) was examined to identify the time origin of the waters. The correlation and spectral analyses indicated a quick response of the catchment to almost all rainfall events as well as a considerable regulation capacity. These results agree with the runoff event-scale data analysis; however, the event analysis revealed the non-linearity of the system, as antecedent conditions play a significant role in this catchment. Further, analysis at the event scale made it possible to clarify the factors (precipitation, precipitation intensity, and initial discharge) controlling the different aspects of the runoff response (runoff coefficient and discharge increase) for this catchment. Finally, the evolution of the EC of the waters enabled the time origin (event or pre-event waters) of the quickflow to be established; specifically, the conductivity showed that pre-event waters usually represent a high percentage of the total discharge during runoff peaks. The importance of soil waters in the catchment is being studied in more depth.
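
The EC-based identification of time origin rests on the standard two-component mixing model, which can be sketched as follows (illustrative numbers, not the Aixola data):

```python
# Two-component hydrograph separation by conductivity mass balance: total
# discharge is split into event (rain) water and pre-event (soil/ground)
# water, assuming each end-member has a known, constant EC.

def pre_event_fraction(ec_total, ec_event, ec_pre):
    """Fraction of streamflow made up of pre-event water."""
    return (ec_total - ec_event) / (ec_pre - ec_event)

ec_pre = 250.0    # baseflow EC, uS/cm (illustrative)
ec_event = 40.0   # rainfall EC, uS/cm (illustrative)
ec_peak = 190.0   # EC measured at the runoff peak (illustrative)

f = pre_event_fraction(ec_peak, ec_event, ec_pre)
print(f"pre-event water at peak: {100 * f:.0f}%")
```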

Relevance: 60.00%

Abstract:

The smart grid is a highly complex system being formed from the traditional power grid by adding new and sophisticated communication and control devices. These will enable the integration of new elements for distributed power generation and an increasingly automated operation, both for utility actions and for customers. To model such systems, a bottom-up method is followed, using only a few basic elements structured into two layers: a physical layer for electrical power transmission and a logical layer for element communication. A simple case study is presented to analyse the possibilities of simulation. It shows a microgrid model with dynamic load management and an integrated approach that can process both electrical and communication flows.

Relevance: 60.00%

Abstract:

Attempts to model any present or future power grid face a huge challenge because a power grid is a complex system with feedback and multi-agent behaviors, integrating generation, distribution, storage, and consumption systems, and using various control and automation computing systems to manage electricity flows. Our approach is to build upon an established, tested, and proven model of the low voltage electricity network by extending it to a generalized energy model. In order to address the crucial issue of energy efficiency, however, additional processes such as energy conversion and storage, and further energy carriers, such as gas and heat, besides the traditional electrical one, must be considered. A more powerful model is therefore required, provided with enhanced nodes, or conversion points, able to deal with multidimensional flows. This article addresses the issue of modeling a local multi-carrier energy network, which can be considered an extension of modeling a low voltage distribution network located in some urban or rural geographic area. But instead of using an external power flow analysis package to do the power flow calculations, as is done for electric networks, in this work we integrate a multi-agent algorithm that performs the task concurrently with the other simulation tasks, and not only for the electric fluid but also for a number of additional energy carriers. As the model is mainly focused on system operation, generation and load models are not developed.
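
A minimal sketch of such an enhanced node, in the style of an energy-hub coupling matrix, is shown below; the efficiencies and dispatch share are hypothetical, not the article's model.

```python
import numpy as np

# A conversion point maps input carriers (electricity, gas) to output
# carriers (electricity, heat) through a coupling matrix C, so that
# outputs = C @ inputs.  All efficiency numbers are made up.

eta_T = 0.98      # transformer efficiency
eta_CHP_e = 0.35  # gas -> electricity (CHP unit)
eta_CHP_h = 0.45  # gas -> heat (CHP unit)
eta_F = 0.90      # gas furnace -> heat
nu = 0.6          # share of incoming gas routed to the CHP unit

C = np.array([
    [eta_T, nu * eta_CHP_e],                     # electricity out
    [0.0,   nu * eta_CHP_h + (1 - nu) * eta_F],  # heat out
])

inputs = np.array([100.0, 50.0])   # kW of electricity and gas entering the node
outputs = C @ inputs
print(outputs)  # [electricity_out, heat_out] in kW
```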

Relevance: 60.00%

Abstract:

The brain is perhaps the most complex system to have ever been subjected to rigorous scientific investigation. The scale is staggering: over 10^11 neurons, each making an average of 10^3 synapses, with computation occurring on scales ranging from a single dendritic spine, to an entire cortical area. Slowly, we are beginning to acquire experimental tools that can gather the massive amounts of data needed to characterize this system. However, to understand and interpret these data will also require substantial strides in inferential and statistical techniques. This dissertation attempts to meet this need, extending and applying the modern tools of latent variable modeling to problems in neural data analysis.

It is divided into two parts. The first begins with an exposition of the general techniques of latent variable modeling. A new, extremely general, optimization algorithm is proposed - called Relaxation Expectation Maximization (REM) - that may be used to learn the optimal parameter values of arbitrary latent variable models. This algorithm appears to alleviate the common problem of convergence to local, sub-optimal, likelihood maxima. REM leads to a natural framework for model size selection; in combination with standard model selection techniques the quality of fits may be further improved, while the appropriate model size is automatically and efficiently determined. Next, a new latent variable model, the mixture of sparse hidden Markov models, is introduced, and approximate inference and learning algorithms are derived for it. This model is applied in the second part of the thesis.

The second part brings the technology of part I to bear on two important problems in experimental neuroscience. The first is known as spike sorting; this is the problem of separating the spikes from different neurons embedded within an extracellular recording. The dissertation offers the first thorough statistical analysis of this problem, which then yields the first powerful probabilistic solution. The second problem addressed is that of characterizing the distribution of spike trains recorded from the same neuron under identical experimental conditions. A latent variable model is proposed. Inference and learning in this model leads to new principled algorithms for smoothing and clustering of spike data.
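
As a concrete instance of the latent variable machinery the dissertation builds on, the sketch below runs textbook EM on a two-component one-dimensional Gaussian mixture; REM itself and the sparse-HMM mixture are beyond this illustration.

```python
import math, random

# Standard EM for a 2-component 1-D Gaussian mixture: alternate between
# computing posterior responsibilities (E-step) and re-estimating the
# component parameters from them (M-step).

def em_gmm(data, iters=200):
    mu = [min(data), max(data)]   # crude but well-separated initialization
    var = [1.0, 1.0]
    pi = [0.5, 0.5]
    for _ in range(iters):
        # E-step: responsibility of each component for each point
        resp = []
        for x in data:
            p = [pi[k] / math.sqrt(2 * math.pi * var[k])
                 * math.exp(-(x - mu[k]) ** 2 / (2 * var[k])) for k in range(2)]
            s = sum(p)
            resp.append([pk / s for pk in p])
        # M-step: weighted maximum-likelihood updates
        for k in range(2):
            nk = sum(r[k] for r in resp)
            pi[k] = nk / len(data)
            mu[k] = sum(r[k] * x for r, x in zip(resp, data)) / nk
            var[k] = sum(r[k] * (x - mu[k]) ** 2 for r, x in zip(resp, data)) / nk + 1e-6
    return mu, var, pi

rng = random.Random(1)
data = [rng.gauss(0.0, 1.0) for _ in range(300)] + [rng.gauss(5.0, 1.0) for _ in range(300)]
mu, var, pi = em_gmm(data)
print(sorted(mu))  # recovered means, near 0 and 5
```

Plain EM of this kind is exactly the procedure whose local-maximum problem REM is designed to alleviate.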

Relevance: 60.00%

Abstract:

Marine stratocumulus clouds are generally optically thick and shallow, exerting a net cooling influence on climate. Changes in atmospheric aerosol levels alter cloud microphysics (e.g., droplet size) and cloud macrophysics (e.g., liquid water path, cloud thickness), thereby affecting cloud albedo and Earth’s radiative balance. To understand the aerosol-cloud-precipitation interactions and to explore the dynamical effects, three-dimensional large-eddy simulations (LES) with detailed bin-resolved microphysics are performed to explore the diurnal variation of marine stratocumulus clouds under different aerosol levels and environmental conditions. It is shown that the marine stratocumulus cloud albedo is sensitive to aerosol perturbation under clean background conditions, and to environmental conditions such as large-scale divergence rate and free tropospheric humidity.

Based on the in-situ Eastern Pacific Emitted Aerosol Cloud Experiment (E-PEACE) during July and August 2011, and on A-Train satellite observations of 589 individual ship tracks during June 2006 to December 2009, an analysis of cloud albedo responses in ship tracks is presented. It is found that the albedo response in ship tracks depends on the mesoscale cloud structure, the free tropospheric humidity, and the cloud top height. Under a closed-cell structure (i.e., cloud cells ringed by a perimeter of clear air), with sufficiently dry air above cloud tops and/or higher cloud top heights, the cloud albedo can become lower in ship tracks. Based on the satellite data, nearly 25% of ship tracks exhibited a decreased albedo. The cloud macrophysical responses are crucial in determining both the strength and the sign of the cloud albedo response to aerosols.

To understand the aerosol indirect effects on global marine warm clouds, multisensor satellite observations, including CloudSat, MODIS, CALIPSO, AMSR-E, CERES, and ECMWF and NCEP data, have been applied to study the sensitivity of cloud properties to aerosol levels and to large-scale environmental conditions. With an estimate of the anthropogenic aerosol fraction, the global aerosol indirect radiative forcing has been assessed.

As the coupling among aerosols, clouds, precipitation, and meteorological conditions in the marine boundary layer is complex, the integration of LES modeling, in-situ aircraft measurements, and global multisensor satellite data analyses improves our understanding of this complex system.
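
The first (Twomey) indirect effect that anchors these albedo responses can be illustrated with the standard fixed-liquid-water-path relation dA/dlnN = A(1-A)/3; the numbers below are illustrative, not E-PEACE or A-Train values.

```python
import math

# At fixed liquid water path, cloud albedo A responds to droplet number N
# as dA/dlnN = A(1-A)/3.  In the logit variable x = ln(A/(1-A)) this ODE
# is simply dx/dlnN = 1/3, so it integrates in closed form.

def albedo_after_perturbation(A0, n_ratio):
    """Albedo after multiplying droplet number by n_ratio (LWP held fixed)."""
    x0 = math.log(A0 / (1.0 - A0))
    x1 = x0 + math.log(n_ratio) / 3.0
    return 1.0 / (1.0 + math.exp(-x1))

A0 = 0.50   # illustrative background stratocumulus albedo
print(albedo_after_perturbation(A0, 2.0))  # doubling N brightens the cloud
```

The macrophysical adjustments discussed above (liquid water path, cloud thickness) are precisely what can offset or reverse this fixed-LWP brightening.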

Relevance: 60.00%

Abstract:

The power system is on the brink of change. Engineering needs, economic forces, and environmental factors are the main drivers of this change. The vision is to build a smart electrical grid, and a smarter market mechanism around it, to fulfill mandates on clean energy. Looking at engineering and economic issues in isolation is no longer an option; an integrated design approach is needed. In this thesis, I revisit some of the classical questions on the engineering operation of power systems that deal with the nonconvexity of the power flow equations. I then explore how these power flow equations interact with electricity markets, to address the fundamental issue of market power in a deregulated market environment. Finally, motivated by the emergence of new storage technologies, I present an interesting result on the investment decision problem of placing storage over a power network. The goal of this study is to demonstrate that modern optimization and game theory can provide unique insights into this complex system. Some of the ideas carry over to applications beyond power systems.
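
The nonconvexity at issue lies in the AC power flow equations; the linearized DC approximation below, on a made-up 3-bus network, shows the convex starting point that market and planning studies commonly build on.

```python
import numpy as np

# DC power flow sketch: line flows are proportional to angle differences,
# and the angles solve a linear system in the bus susceptance matrix.
# Network data are invented for illustration.

lines = [(0, 1, 10.0), (1, 2, 10.0), (0, 2, 5.0)]  # (from, to, susceptance)
P = np.array([0.5, -1.5])  # net injections at buses 1 and 2 (bus 0 is slack)

# Assemble the bus susceptance matrix B.
B = np.zeros((3, 3))
for f, t, b in lines:
    B[f, f] += b; B[t, t] += b
    B[f, t] -= b; B[t, f] -= b

theta = np.zeros(3)                        # slack angle fixed at 0
theta[1:] = np.linalg.solve(B[1:, 1:], P)  # remaining voltage angles

flows = {(f, t): b * (theta[f] - theta[t]) for f, t, b in lines}
print(flows)  # per-unit line flows; the slack bus supplies the imbalance
```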

Relevance: 60.00%

Abstract:

In the field of additional-language teaching and learning, research on the development of oral proficiency has shown it to be a multidimensional phenomenon. Nakatani (2010) showed that mastery of communication strategies is an indicator of linguistic performance and relates to learner proficiency; Kang, Rubin and Pickering (2010) observed that phonological features affect perceptions of intelligibility and proficiency; Hewitt and Stephenson (2011) and Ahmadian (2012) indicated that individual psychological conditions interfere with the quality of oral production. Escribano (2004) suggested that contextual reference is essential in the construction of meaning; Gao (2011) pointed out the benefits of meaning-based teaching grounded in conceptual metaphors (LAKOFF and JOHNSON, 1980), dual coding (CLARK and PAIVIO, 1991) and image schemas (LAKOFF, 1987); and Ellis and Ferreira-Junior (2009) demonstrated that constructions exhibit recency and priming effects, affecting the language use of interactional partners. These studies point to the complex nature of L2 acquisition, but do so within the experimental paradigm of psycholinguistics. Larsen-Freeman (2006), in turn, demonstrates, within the paradigm of Complexity Theory, that fluency, accuracy and complexity develop over time with a high degree of variability. Along similar lines, Paiva (2011) observes that Second Language Acquisition (SLA) systems are self-organizing. These works, however, did not address L2 learners at beginning proficiency levels, as I intend to do here. Taking Complexity Theory and Cognitive Linguistics as theoretical frameworks, this work presents a qualitative-interpretive case study, with quantitative nuances, that discusses the adaptation processes that emerged in the oral expression of a group of beginning learners of English as an additional language in a vocational context.
It starts from the understanding that several complex (sub)systems co-occur in the classroom, co-varying and co-adapting at different levels. The investigation relied on transcribed data from three assessments collected over 28 hours of class, in the JOB INTERVIEW domain. After observing the learners' oral production, I created a taxonomy to categorize the adaptations that occurred in the syntax, semantics, phonology and pragmatics of the target language. I then organized the categories into levels of prototypicality (ROSCH et al., 1976) according to the most frequent adaptations. Finally, I assessed the intelligibility of each utterance, classifying them into three levels. From these data, I described how the participants' oral practice emerged and developed over the 28 hours. The findings confirm one of the premises of Cognitive Linguistics by showing that the levels of linguistic description work together toward communicative success. They also demonstrate that the teacher's role, as LARSEN-FREEMAN and CAMERON (2008) argue, is not to generate uniformity, but to provide experiences that establish continuity between the world, the body and the mind.

Relevance: 60.00%

Abstract:

Pattern recognition is an area of computational intelligence that supports problem solving with computational tools. Such problems include face recognition, fingerprint identification and signature authentication. Automatic signature authentication is relevant because it is tied to the recognition of individuals and their credentials in complex systems, and to financial matters. This work presents a study of the parameters of Dynamic Time Warping, an algorithm used to align two signatures and measure the similarity between them. By varying the algorithm's main parameters over a wide range of values, the mean classification error rates were obtained and evaluated. Based on these first evaluations, the need was identified to compute one of these parameters, the gap cost, dynamically, in order to tune it for use in a practical application. A proposal for this computation is presented and also evaluated. An alternative representation of the signature's attributes is also proposed and evaluated, one that considers the curvature at each point captured during the acquisition process, using normal vectors as the representation. The evaluations performed throughout the study used the Equal Error Rate (EER) as the quality indicator, and the proposed techniques were compared with established ones, achieving a mean EER of 3.47%.
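
A minimal sketch of the dynamic time warping alignment at the heart of the study (without the tuned gap cost or the normal-vector feature representation discussed above):

```python
# Classic O(n*m) DTW dynamic program: D[i][j] is the minimal cumulative
# cost of aligning the first i points of one sequence with the first j
# points of the other.  Signatures here are 1-D toy trajectories.

def dtw(a, b, dist=lambda x, y: abs(x - y)):
    n, m = len(a), len(b)
    INF = float("inf")
    D = [[INF] * (m + 1) for _ in range(n + 1)]
    D[0][0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            D[i][j] = dist(a[i - 1], b[j - 1]) + min(D[i - 1][j],      # insertion
                                                     D[i][j - 1],      # deletion
                                                     D[i - 1][j - 1])  # match
    return D[n][m]

genuine = [0.0, 1.0, 2.0, 1.0, 0.0]
same_shape_slower = [0.0, 0.5, 1.0, 2.0, 1.0, 0.5, 0.0]  # same stroke, slower pen
forgery = [0.0, 2.0, 0.0, 2.0, 0.0]
print(dtw(genuine, same_shape_slower), dtw(genuine, forgery))  # → 1.0 3.0
```

A verification system thresholds such distances: the time-warped genuine repetition scores far closer to the reference than the differently shaped forgery.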