952 results for dynamical scaling


Relevance:

20.00%

Publisher:

Abstract:

A method that provides a three-dimensional representation of the basin of attraction of a dynamical system from experimental data was applied to the problem of dynamic balance restoration. The method is based on the density of the data in the phase space of the system under study and makes use of modeling and numerical curve-fitting tools. For the dynamical system of balance restoration, the shape and the size of the basin of attraction depend on the dynamics of the postural restoring mechanisms and contain important information regarding the biomechanical as well as the neuromuscular condition of the individual. The aim of this work was to examine the ability of the method to detect, through the observed changes in the shape and/or the size of the calculated basins of attraction, (a) the inherent differences between different systems (in the current application, postural restoring systems of different individuals) and (b) induced changes in the same system (the postural restoring system of an individual). The results of the study confirm the validity of the method and furthermore demonstrate its robustness.
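
The abstract does not spell out the estimation procedure; the following Python sketch only illustrates the general idea of recovering a basin-like region from the density of experimental trajectory points in a reconstructed three-dimensional phase space. The grid resolution, the Gaussian kernel density estimator, and the density threshold are illustrative assumptions, not the authors' implementation.

```python
import numpy as np
from scipy.stats import gaussian_kde

def basin_from_density(samples, grid_pts=25, level=0.05):
    """Approximate a basin of attraction as the phase-space region where the
    density of observed trajectory points exceeds a threshold.

    samples : (3, N) array of phase-space coordinates, e.g. centre-of-pressure
              displacement, velocity and acceleration during balance recovery.
    """
    kde = gaussian_kde(samples)                      # smooth density estimate
    axes = [np.linspace(s.min(), s.max(), grid_pts) for s in samples]
    X, Y, Z = np.meshgrid(*axes, indexing="ij")
    grid = np.vstack([X.ravel(), Y.ravel(), Z.ravel()])
    density = kde(grid).reshape(X.shape)
    inside = density > level * density.max()         # thresholded "basin" region
    cell_vol = np.prod([a[1] - a[0] for a in axes])  # crude volume measure
    return inside, inside.sum() * cell_vol

# toy usage with synthetic balance-recovery data
rng = np.random.default_rng(0)
samples = rng.normal(size=(3, 2000)) * np.array([[1.0], [0.5], [0.2]])
mask, volume = basin_from_density(samples)
print(f"estimated basin volume: {volume:.3f}")
```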

Relevance:

20.00%

Publisher:

Abstract:

This work aims to develop a novel Cross-Entropy (CE) optimization-based fuzzy controller for an Unmanned Aerial Monocular Vision-IMU System (UAMVIS) to solve the see-and-avoid problem using its accurate autonomous localization information. The function of this fuzzy controller is to regulate the heading of the system to avoid obstacles, e.g. a wall. In the Matlab Simulink-based training stages, the Scaling Factor (SF) is first adjusted according to the specified task, and then the Membership Function (MF) is tuned based on the optimized Scaling Factor to further improve the collision-avoidance performance. After obtaining the optimal SF and MF, the rule base was reduced by 64% (from 125 rules to 45 rules), and a large number of real flight tests with a quadcopter were carried out. The experimental results show that this approach precisely navigates the system to avoid the obstacle. To the best of our knowledge, this is the first work to present an optimized fuzzy controller for the UAMVIS that uses the Cross-Entropy method for Scaling Factor and Membership Function optimization.
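
The cross-entropy procedure itself is not detailed in the abstract; the sketch below shows, under illustrative assumptions, how a scalar controller parameter such as a scaling factor could be tuned with a basic cross-entropy loop. The cost function, sample count, elite fraction and iteration budget are placeholders, not the settings used in the paper.

```python
import numpy as np

def cross_entropy_tune(cost, mu=1.0, sigma=0.5, n_samples=50,
                       elite_frac=0.2, n_iters=30, seed=0):
    """Basic cross-entropy search for a scalar parameter (e.g. a fuzzy
    controller scaling factor): sample candidates from a Gaussian, keep the
    elite fraction with the lowest cost, refit the Gaussian, repeat."""
    rng = np.random.default_rng(seed)
    n_elite = max(1, int(elite_frac * n_samples))
    for _ in range(n_iters):
        candidates = rng.normal(mu, sigma, n_samples)
        elite = candidates[np.argsort([cost(c) for c in candidates])][:n_elite]
        mu, sigma = elite.mean(), elite.std() + 1e-6   # refit sampling distribution
    return mu

# toy cost: distance of a simulated heading response from a desired value
desired = 0.8
tuned_sf = cross_entropy_tune(lambda sf: (np.tanh(sf) - desired) ** 2)
print(f"tuned scaling factor: {tuned_sf:.3f}")
```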

Relevance:

20.00%

Publisher:

Abstract:

Eye-safety requirements in important applications such as LIDAR or Free Space Optical Communications make the generation of high-power, short optical pulses at 1.5 µm particularly interesting. Moreover, high repetition rates allow the error and/or the measurement time to be reduced in applications involving pulsed time-of-flight measurements, such as range finders, 3D scanners or traffic velocity controls. The Master Oscillator Power Amplifier (MOPA) architecture is an interesting source for these applications since large changes in output power can be obtained at GHz rates with a relatively small modulation of the current in the Master Oscillator (MO). We have recently demonstrated short optical pulses (100 ps) with high peak power (2.7 W) by gain switching the MO of a monolithically integrated 1.5 µm MOPA. Although in an integrated MOPA the laser and the amplifier are ideally independent devices, compound-cavity effects due to the residual reflectance at the different interfaces are often observed, leading to modal instabilities such as self-pulsations.
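
As a brief illustration of why repetition rate matters for pulsed time-of-flight measurements, the snippet below computes range from a round-trip delay and shows how averaging many pulses reduces timing noise, assuming uncorrelated jitter (a textbook 1/sqrt(N) argument, not anything specific to the MOPA of this work). The jitter value is a placeholder.

```python
import math

C = 299_792_458.0            # speed of light, m/s

def tof_range(round_trip_s):
    """Range from a pulsed time-of-flight round-trip delay."""
    return C * round_trip_s / 2.0

# single-shot example: a 667 ns round trip corresponds to ~100 m
print(f"range: {tof_range(667e-9):.1f} m")

# averaging N pulses: timing jitter (and hence range error) drops as 1/sqrt(N)
jitter_s = 100e-12           # assumed single-shot timing jitter
for n_pulses in (1, 100, 10_000):
    sigma_range = C * jitter_s / 2.0 / math.sqrt(n_pulses)
    print(f"N = {n_pulses:>6d}  range std ≈ {sigma_range * 100:.2f} cm")
```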

Relevance:

20.00%

Publisher:

Abstract:

Lagrangian descriptors are a recent technique that reveals geometrical structures in phase space and is valid for aperiodically time-dependent dynamical systems. We discuss a general methodology for constructing them and give a "heuristic argument" that explains why the method is successful. We support this argument with explicit calculations on a benchmark problem. Several other benchmark examples are considered that allow us to compare the performance of Lagrangian descriptors with both finite-time Lyapunov exponents (FTLEs) and finite-time averages of certain components of the vector field ("time averages"). In all cases Lagrangian descriptors are shown to be both more accurate and more computationally efficient than these methods.
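
A common concrete choice of Lagrangian descriptor is the arc length of trajectories integrated forward and backward over a finite window; the sketch below evaluates it on a grid of initial conditions for a periodically forced Duffing oscillator. The choice of descriptor, the forcing parameters and the integration window are illustrative assumptions; the paper's benchmarks and heuristic argument are not reproduced here.

```python
import numpy as np
from scipy.integrate import solve_ivp

def duffing(t, z, delta=0.1, gamma=0.3, omega=1.0):
    """Forced Duffing oscillator, a standard benchmark flow."""
    x, y = z
    return [y, x - x**3 - delta * y + gamma * np.cos(omega * t)]

def lagrangian_descriptor(x0, y0, t0=0.0, tau=10.0, n=400):
    """Arc-length descriptor M: trajectory length over [t0 - tau, t0 + tau]."""
    total = 0.0
    for t_end in (t0 + tau, t0 - tau):              # forward and backward legs
        t_eval = np.linspace(t0, t_end, n)
        sol = solve_ivp(duffing, (t0, t_end), [x0, y0],
                        t_eval=t_eval, rtol=1e-6, atol=1e-9)
        dx = np.diff(sol.y[0])
        dy = np.diff(sol.y[1])
        total += np.sum(np.hypot(dx, dy))           # polyline arc length
    return total

# evaluate M on a small grid; sharp ridges of M mark stable/unstable manifolds
xs = np.linspace(-1.5, 1.5, 21)
ys = np.linspace(-1.0, 1.0, 15)
M = np.array([[lagrangian_descriptor(x, y) for x in xs] for y in ys])
print(M.shape, float(M.min()), float(M.max()))
```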

Relevance:

20.00%

Publisher:

Abstract:

The current space environment contains a large number of micrometeoroids and man-made debris objects, which pose a risk to the safety of space operations. The situation is continuously worsening because of collisions between debris objects in orbit and new satellite launches. A significant part of this debris consists of dead satellites and satellite fragments resulting from explosions and collisions of objects in orbit. Mitigating this problem has become a matter of priority concern for all institutions involved in space operations. Among the existing solutions, electrodynamic tethers (EDT) provide an efficient device for the rapid de-orbiting of satellites in low Earth orbit (LEO) at the end of their useful life. The field of electrodynamic tether (EDT) research has been very fruitful since the 1970s. Thanks to theoretical studies and to missions demonstrating tether operation in orbit, this technology has developed very rapidly in recent decades. During this research period, multiple technical problems of various kinds have been identified and overcome. Much of the basic operation of an EDT system depends on its ability to survive micrometeoroids and space debris. A tether can be completely severed by a particle above a certain minimum diameter. If cut by a particle impact, a tether could itself become a risk to other operating satellites. Unfortunately, after several in-orbit demonstrations, it has not been possible to conclude how significant this problem is for the operation of the system. This thesis presents a theoretical analysis of the survivability of tethers in space. The study demonstrates the advantages of tethers of rectangular cross-section (tapes) over conventional tethers (round wires) in terms of survival probability during the mission. Because of its particular geometry (length much greater than its cross-section), a tether may have a relatively high risk of being severed by a single impact with a small particle. An analytical calculation of the fatal impact rate for a cylindrical tether and a tape tether of equal length and mass, considering the space-debris particle flux of NASA's ORDEM2000 model, shows a higher survival probability for tapes. This analysis has been compared with a numerical calculation using the ORDEM2000 and ESA MASTER2005 flux models. It is also shown that, for the same time in orbit, a tape has a survival probability one and a half orders of magnitude higher than a cylindrical tether of equal mass and length. Moreover, de-orbiting a tape from a given altitude is much faster because its larger perimeter allows it to collect more current. This is an additional factor that increases the tape's survival probability, since it is exposed to possible debris impacts for less time. For this reason, it can finally be stated that, in a practical sense, the survivability of the tape is quite high compared with that of a cylindrical tether.
The second objective of this work is the development of an analytical model, improving the flux approximation of ORDEM2000 and MASTER2009, which allows the fatal impact rate per year for a tape to be computed accurately over a range of altitudes and inclinations rather than for a particular set of conditions. The number of cuts over a given time is obtained as a function of the tape geometry and the orbit properties. For the same conditions, the analytical model is compared with the results obtained from the numerical analysis. This scalable model has been essential for optimizing the tether design for satellite de-orbit missions, varying the satellite mass and the initial orbital altitude. The survivability model has been used to build an objective function for optimizing tether design. The objective function is the product of the tether-to-satellite mass ratio and the number of cuts over a given time. By combining the survivability model with a dynamic equation of the tether in which the Lorentz force appears, time is eliminated and the objective function is written as a function of the tape geometry and the orbit properties. This optimization model led to the development of a software tool that is in the process of being registered by UPM. The final stage of this study is the estimation of the number of fatal impacts on a tape using, for the first time, an experimental ballistic limit equation. This equation has been developed for tapes and represents the effects of both the impact velocity and the impact angle. The results show that the tape is highly resistant to space debris impacts and that, for a tape with a given cross-section, the number of critical impacts due to non-trackable particles is significantly low. ABSTRACT The current space environment, consisting of man-made debris and tiny meteoroids, poses a risk to safe operations in space, and the situation is continuously deteriorating due to in-orbit debris collisions and new satellite launches. A significant portion of this debris consists of dead satellites and fragments of satellites resulting from explosions and in-orbit collisions. Mitigation of space debris has become an issue of primary concern for all the institutions involved in space operations. Bare electrodynamic tethers (EDT) can provide an efficient mechanism for rapid de-orbiting of defunct satellites from low Earth orbit (LEO) at end of life. Research on EDTs has been a fruitful field since the 1970s. Thanks to both theoretical studies and in-orbit demonstration missions, this technology has developed very quickly over the following decades. During this period, several technical issues were identified and overcome. The core functionality of an EDT system greatly depends on its survivability against micrometeoroids and orbital debris, and a tether can itself become a kind of debris for other operating satellites if it is cut by a particle impact; however, this very issue remains inconclusive and contested even after a number of space demonstrations. A tether can be completely cut by debris above some minimal diameter. This thesis presents a theoretical analysis of the survivability of tethers in space. The study demonstrates the advantages of tape tethers over conventional round wires, particularly regarding survivability during the mission.
Because of its particular geometry (length very much larger than cross-sectional dimensions), a tether may have a relatively high risk of being severed by a single impact from small debris. As a first approach to the problem, the survival probability has been compared for a round and a tape tether of equal mass and length. The fatal impact rates of orbital debris on round and tape tethers, evaluated with an analytical approximation to the debris flux modeled by NASA's ORDEM2000, show a much higher survival probability for tapes. A comparative numerical analysis using the ORDEM2000 and ESA MASTER2005 debris flux models shows good agreement with the analytical result. It also shows that, for a given time in orbit, a tape has a survival probability about one and a half orders of magnitude higher than a round tether of equal mass and length. Because de-orbiting from a given altitude is much faster for the tape, due to its larger perimeter, its probability of survival in a practical sense is quite high. As the next step, an analytical model derived in this work allows the fatal impact rate per year for a tape tether to be calculated accurately. The model uses power laws for debris-size ranges, in both the ORDEM2000 and MASTER2009 debris flux models, to calculate tape-tether survivability at different LEO altitudes. The analytical model, which depends on the tape dimensions (width, thickness) and the orbital parameters (inclination, altitude), is then compared with fully numerical results for different orbit inclinations, altitudes and tape widths for both ORDEM2000 and MASTER2009 flux data. This scalable model not only estimates the fatal impact count but has also proved essential in optimizing the tether design for satellite de-orbit missions, varying satellite mass and initial orbital altitude and inclination. Within the frame of this dissertation, a simple analysis is finally presented which, thanks to the survivability model developed, shows the scaling properties of tape tethers and allows de-orbit performance to be analyzed and compared over a large range of satellite masses and orbit properties. The work explicitly expresses the product of the tether-to-satellite mass ratio and the fatal impact count as a function of the tether geometry and the orbital parameters. Combining the tether dynamic equation, which involves the Lorentz drag, with the space-debris impact survivability model eliminates time from the expression. Hence the product is independent of the tether de-orbit history and depends only on the mission constraints and the tether length, width and thickness. This optimization model finally led to the development of a user-friendly software tool named BETsMA, currently in the process of registration by UPM. As a final step, the fatal impact rate on a tape tether has been estimated using, for the first time, an experimental ballistic limit equation derived for tapes that accounts for the effects of both the impact velocity and the impact angle. It is shown that tape tethers are highly resistant to space debris impacts and that, for a tape tether with a given cross-section, the number of critical events due to impacts with non-trackable debris is always significantly low.
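
To make the survivability comparison concrete, here is a minimal sketch of how a fatal-impact count can be estimated by evaluating an assumed power-law cumulative debris flux above the lethal particle size and multiplying by the exposed tether area. The flux constants, the lethal-size rules (a fixed fraction of the wire diameter or of the tape width) and the tether dimensions are hypothetical placeholders, not the ORDEM2000/MASTER figures or the thesis' ballistic limit equation.

```python
def cumulative_flux(d_m, f1=1e-5, exponent=-2.5):
    """Assumed cumulative debris flux F(>d) [impacts / m^2 / yr] for particle
    diameter d [m]; a single power law standing in for ORDEM/MASTER tables."""
    return f1 * (d_m / 1e-3) ** exponent

def fatal_impacts_per_year(length_m, width_m, lethal_d_m):
    """Fatal impact rate = flux above the lethal size x exposed tether area
    (area simplified to length x width for this illustration)."""
    return cumulative_flux(lethal_d_m) * length_m * width_m

L = 5000.0                       # tether length, m (illustrative)
# round wire: 0.5 mm diameter; assume a particle ~1/3 of the diameter is lethal
round_rate = fatal_impacts_per_year(L, 0.5e-3, lethal_d_m=0.5e-3 / 3)
# tape of equal mass: much wider and thinner; assume lethal size ~1/3 of width
tape_rate = fatal_impacts_per_year(L, 20e-3, lethal_d_m=20e-3 / 3)

print(f"round wire: {round_rate:.3e} fatal impacts / yr")
print(f"tape:       {tape_rate:.3e} fatal impacts / yr")
print(f"tape/round rate ratio: {tape_rate / round_rate:.3f}")
```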

Relevance:

20.00%

Publisher:

Abstract:

Division of labor is a widely studied aspect of the colony behavior of social insects. Division of labor models indicate how individuals distribute themselves in order to perform different tasks simultaneously. However, models that study division of labor from a dynamical-systems point of view cannot be found in the literature. In this paper, we define a division of labor model as a discrete-time dynamical system, in order to study the equilibrium points and their properties related to convergence and stability. By making use of this analytical model, an adaptive algorithm based on division of labor can be designed to satisfy dynamic criteria. In this way, we have designed and tested an algorithm that varies the response thresholds in order to modify the dynamic behavior of the system. This behavior modification allows the system to adapt to specific environmental and collective situations, making the algorithm a good candidate for distributed control applications. The variable-threshold algorithm is based on specialization mechanisms. It is able to achieve asymptotically stable behavior of the system in different environments and independently of the number of individuals. The algorithm has been successfully tested under several initial conditions and numbers of individuals.
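
The abstract describes a discrete-time response-threshold model with adaptive thresholds but does not give the update equations; the sketch below implements a common variant of that idea (stimulus-versus-threshold response probabilities, thresholds reinforced for performed tasks) purely as an illustration. The probability function, learning rates and parameter values are assumptions, not the model analyzed in the paper.

```python
import numpy as np

rng = np.random.default_rng(1)

N_AGENTS, N_TASKS, STEPS = 50, 3, 200
thresholds = rng.uniform(0.2, 0.8, size=(N_AGENTS, N_TASKS))
stimuli = np.full(N_TASKS, 0.5)
DELTA, ALPHA = 0.10, 0.05      # stimulus growth rate / work efficiency
XI, PHI = 0.02, 0.01           # threshold reinforcement / forgetting rates

for _ in range(STEPS):
    # response probability grows when a task stimulus exceeds an agent's threshold
    prob = stimuli**2 / (stimuli**2 + thresholds**2)
    chosen = prob.argmax(axis=1)                       # each agent picks one task
    engage = rng.random(N_AGENTS) < prob[np.arange(N_AGENTS), chosen]
    work = np.bincount(chosen[engage], minlength=N_TASKS)

    # discrete-time stimulus dynamics: demand grows, performed work reduces it
    stimuli = np.clip(stimuli + DELTA - ALPHA * work / N_AGENTS, 0.0, None)

    # adaptive thresholds: reinforce the performed task, forget the others
    thresholds[engage] += PHI                          # mild forgetting everywhere
    thresholds[engage, chosen[engage]] -= XI + PHI     # net decrease for done task
    thresholds = np.clip(thresholds, 0.0, 1.0)

print("mean threshold per task:", thresholds.mean(axis=0).round(2))
print("final stimuli:", stimuli.round(2))
```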

Relevance:

20.00%

Publisher:

Abstract:

The purpose of this thesis was to study the offensive performance of elite handball teams when handball is considered as a complex, non-linear dynamical system. A time-dependent, dynamic analysis perspective was adopted to assess team performance during the game. The overall sample comprised the 240 games played in the 2011-2012 season of the Spanish men's professional handball league (ASOBAL League). Only close games (final goal difference ≤ 5; n = 142) were considered in the subsequent analysis. Match status, game location, quality of opposition and game period were incorporated into the analysis as situational variables. Three studies make up the core of the thesis. In the first study, we analyzed the coordination between the time series representing the scoring process of each of the two opposing teams throughout the game. Autocorrelations, cross-correlations, double moving averages and the Hilbert transform were used for the analysis. The scoring processes of the teams were highly consistent across all games and showed strong in-phase modes of coordination in all game contexts. The only differences were found in relation to the game period. The coordination between the teams' scoring processes was significantly lower in the 1st and 2nd periods (0–10 min and 10–20 min), showing clearly increasing coordination as the game progressed. This suggests that it is the first 20 minutes that break games open. In the second study, we analyzed the temporal effects (immediate, short-term and medium-term) of timeouts on the teams' scoring performance. Multiple linear regression models were used for the analysis. The results showed increments of 0.59, 1.40 and 1.85 goals for the periods covering the first, third and fifth possessions of the teams that called the timeout. Conversely, significantly negative effects were found for the opposing teams, with decrements of 0.50, 1.43 and 2.05 goals in the same periods, respectively. The influence of the situational variables was only registered in certain game periods. Finally, in the third study, we analyzed the temporal effects of player exclusions on the teams' scoring performance, both for the teams suffering the exclusion (numerical inferiority) and for their opponents (numerical superiority). Multiple linear regression models were used for the analysis. The results showed significant negative effects on the number of goals scored by the teams with one player fewer, with decrements of 0.25, 0.40, 0.61, 0.62 and 0.57 goals for the periods covering the first, second, third, fourth and fifth minutes before and after the exclusion. For the opponents, the results showed significant positive effects, with increments of the same magnitude in the same periods. This trend was not affected by match status, game location, quality of opposition or game period. The scoring increments were smaller than might be expected from a two-minute numerical superiority. Psychological theories such as choking under pressure in situations where a high level of performance is expected may help to explain this finding.
The final chapters of the thesis list the main conclusions and present various practical applications arising from the three studies. Finally, the limitations and future lines of research are presented. ABSTRACT The purpose of this thesis was to investigate the offensive performance of elite handball teams when considering handball as a complex non-linear dynamical system. The time-dependent dynamic approach was adopted to assess teams' performance during the game. The overall sample comprised the 240 games played in the season 2011-2012 of the men's Spanish Professional Handball League (ASOBAL League). In the subsequent analyses, only close games (final goal difference ≤ 5; n = 142) were considered. The situational variables match status, game location, quality of opposition, and game period were incorporated into the analysis. Three studies composed the core of the thesis. In the first study, we analyzed the game-scoring coordination between the time series representing the scoring processes of the two opposing teams throughout the game. Autocorrelation, cross-correlation, the double moving average, and the Hilbert transform were used for the analysis. The scoring processes of the teams presented a high consistency across all the games as well as strong in-phase modes of coordination in all the game contexts. The only differences were found when controlling for the game period. The coordination in the scoring processes of the teams was significantly lower for the 1st and 2nd periods (0–10 min and 10–20 min), showing clearly increasing coordination as the game progressed. This suggests that the first 20 minutes are the ones that break the game-scoring open. In the second study, we analyzed the temporal effects (immediate, short-term, and medium-term) of team timeouts on teams' scoring performance. Multiple linear regression models were used for the analysis. The results showed increments of 0.59, 1.40 and 1.85 goals for the periods covering the first, third and fifth ball possessions after the timeout for the teams that requested it. Conversely, significant negative effects on goals scored were found for the opponent teams, with decrements of 0.59, 1.43 and 2.04 goals for the same periods, respectively. The influence of situational variables on the scoring performance was only registered in certain game periods. Finally, in the third study, we analyzed the temporal effects of player exclusions on teams' scoring performance, for the teams suffering the exclusion (numerical inferiority) and for the opponents (numerical superiority). Multiple linear regression models were used for the analysis. The results showed significant negative effects on the number of goals scored for the teams with one less player, with decrements of 0.25, 0.40, 0.61, 0.62, and 0.57 goals for the periods covering one, two, three, four and five minutes of play before and after the exclusion. For the opponent teams, the results showed positive effects, with increments of the same magnitude in the same game periods. This trend was not affected by match status, game location, quality of opposition, or game period. The scoring increments were smaller than might be expected from a 2-minute numerical playing superiority. Psychological theories such as choking under pressure in situations where good performance is expected could help explain this finding. The final chapters of the thesis enumerate the main conclusions and underline the main practical applications that arise from the three studies. 
Lastly, limitations and future research directions are described.
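
The coordination analysis of the first study relies on standard signal-processing tools (cross-correlation and the Hilbert transform); the sketch below shows, with synthetic scoring series, how a lag-0 correlation and an in-phase coordination measure could be extracted. The synthetic data, the detrending step and the in-phase tolerance are illustrative choices only, not the thesis' actual pipeline.

```python
import numpy as np
from scipy.signal import hilbert

rng = np.random.default_rng(2)

# synthetic cumulative scoring series for two teams over a 60-minute game
minutes = np.arange(60)
rate = 0.45                                   # ~27 goals per team per game
team_a = np.cumsum(rng.poisson(rate, 60))
team_b = np.cumsum(rng.poisson(rate, 60))

def detrend(x):
    """Remove the linear trend so phase reflects fluctuations, not the drift."""
    t = np.arange(len(x))
    slope, intercept = np.polyfit(t, x, 1)
    return x - (slope * t + intercept)

a, b = detrend(team_a), detrend(team_b)

# lag-0 cross-correlation of the detrended scoring processes
xcorr = np.corrcoef(a, b)[0, 1]

# instantaneous relative phase via the analytic signal (Hilbert transform)
phase_a = np.unwrap(np.angle(hilbert(a)))
phase_b = np.unwrap(np.angle(hilbert(b)))
rel_phase = phase_a - phase_b
in_phase_fraction = np.mean(np.abs(np.angle(np.exp(1j * rel_phase))) < np.pi / 4)

print(f"lag-0 cross-correlation: {xcorr:.2f}")
print(f"fraction of minutes near in-phase coordination: {in_phase_fraction:.2f}")
```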

Relevance:

20.00%

Publisher:

Abstract:

Climate variability and changes in the frequency of extreme events have a direct impact on crop damage and yield. In a previous work by Capa et al. (2013), crop yield variability was studied using different reanalysis datasets with the aim of extending the time series of potential yield. The reliability of these time series was checked using observational data. The influence of sea surface temperature on crop yield variability was studied, finding a relation with the El Niño phenomenon. The highest correlation between El Niño and yield occurred during 1960-1980. This study aims to analyse the dynamical mechanism of El Niño impacts on maize yield in Spain during 1960-1980 by comparison with atmospheric circulation patterns.
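
As a minimal illustration of the kind of index-yield correlation the study builds on, the snippet below correlates a synthetic ENSO-like index with synthetic detrended yield anomalies for 1960-1980. The data, the assumed -0.4 coupling and the noise level are placeholders, not the Capa et al. (2013) results.

```python
import numpy as np

# illustrative annual series, 1960-1980: a Niño-3.4-like SST index and
# detrended maize-yield anomalies (synthetic placeholders, not the study data)
rng = np.random.default_rng(3)
years = np.arange(1960, 1981)
nino = rng.normal(0.0, 1.0, years.size)
yield_anom = -0.4 * nino + rng.normal(0.0, 0.8, years.size)

# Pearson correlation between the climate index and the yield anomalies
r = np.corrcoef(nino, yield_anom)[0, 1]
print(f"correlation (1960-1980): r = {r:.2f}")
```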

Relevance:

20.00%

Publisher:

Abstract:

We present a set of new volume scaling relationships specific to Svalbard glaciers, derived from a sample of 60 volume–area pairs. Glacier volumes are computed from ground-penetrating radar (GPR)-retrieved ice thickness measurements, which have been compiled from different sources for this study. The most precise scaling models, in terms of lowest cross-validation errors, are obtained using a multivariate approach where, in addition to glacier area, glacier length and elevation range are also used as predictors. Using this multivariate scaling approach, together with the Randolph Glacier Inventory V3.2 for Svalbard and Jan Mayen, we obtain a regional volume estimate of 6700 ± 835 km³, or 17 ± 2 mm of sea-level equivalent (SLE). This result lies in the mid- to low range of recently published estimates, which range from 13 to 24 mm SLE. We assess the sensitivity of the scaling exponents to glacier characteristics such as size, aspect ratio and average slope, and find that the volume of steep-slope and cirque-type glaciers is not very sensitive to changes in glacier area.
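
The multivariate scaling approach amounts to fitting a power law in several predictors by linear regression in log space; the sketch below shows that procedure on synthetic glacier data. The coefficients, the chosen predictors and the synthetic sample are illustrative assumptions, not the calibrated Svalbard relationships.

```python
import numpy as np

# synthetic glacier sample: area A (km^2), length L (km), elevation range R (km)
rng = np.random.default_rng(4)
n = 60
A = 10 ** rng.uniform(-1, 2, n)                  # 0.1 - 100 km^2
L = 1.5 * A ** 0.6 * np.exp(rng.normal(0, 0.1, n))
R = 0.4 * A ** 0.25 * np.exp(rng.normal(0, 0.1, n))
V_true = 0.03 * A ** 1.2 * L ** 0.3 * R ** 0.2   # assumed "true" volumes, km^3
V = V_true * np.exp(rng.normal(0, 0.15, n))      # add lognormal scatter

# multivariate power law V = c * A^a * L^b * R^d  ->  linear fit in log space
X = np.column_stack([np.ones(n), np.log(A), np.log(L), np.log(R)])
coef, *_ = np.linalg.lstsq(X, np.log(V), rcond=None)
c = np.exp(coef[0])
a, b, d = coef[1:]
print(f"V ≈ {c:.3f} * A^{a:.2f} * L^{b:.2f} * R^{d:.2f}")

# a regional estimate would then sum the predicted volumes over an inventory
V_pred = c * A**a * L**b * R**d
print(f"sample total volume: {V_pred.sum():.1f} km^3")
```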

Relevance:

20.00%

Publisher:

Abstract:

Acknowledgements The first author has been supported by a Georg Forster Research Fellowship granted by the Alexander von Humboldt Foundation, Germany.

Relevance:

20.00%

Publisher:

Abstract:

Acknowledgments This paper was developed within the scope of the IRTG 1740/TRP 2011/50151-0, funded by the DFG/FAPESP, and supported by the Government of the Russian Federation (Agreement No. 14.Z50.31.0033 with the Institute of Applied Physics RAS). The first author thanks Dr Roman Ovsyannikov for valuable discussions regarding estimation of the mistake probability.

Relevance:

20.00%

Publisher:

Abstract:

Neocortex, a new and rapidly evolving brain structure in mammals, has a similar layered architecture in species over a wide range of brain sizes. Larger brains require longer fibers to communicate between distant cortical areas; the volume of the white matter that contains long axons increases disproportionally faster than the volume of the gray matter that contains cell bodies, dendrites, and axons for local information processing, according to a power law. The theoretical analysis presented here shows how this remarkable anatomical regularity might arise naturally as a consequence of the local uniformity of the cortex and the requirement for compact arrangement of long axonal fibers. The predicted power law with an exponent of 4/3 minus a small correction for the thickness of the cortex accurately accounts for empirical data spanning several orders of magnitude in brain sizes for various mammalian species, including human and nonhuman primates.
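
The stated regularity, white-matter volume growing roughly as the 4/3 power of gray-matter volume (with a small cortical-thickness correction), can be checked against data with a log-log fit; the snippet below sketches that fit on synthetic volumes. The synthetic values stand in for the cross-species dataset, which is not reproduced here.

```python
import numpy as np

# synthetic gray-matter volumes spanning several orders of magnitude (cm^3)
rng = np.random.default_rng(5)
gray = 10 ** rng.uniform(0, 3, 40)
# generate white matter following the predicted ~4/3 power law plus scatter
white = 0.02 * gray ** (4.0 / 3.0) * np.exp(rng.normal(0, 0.15, gray.size))

# recover the exponent with a least-squares fit in log-log space
slope, intercept = np.polyfit(np.log10(gray), np.log10(white), 1)
print(f"fitted exponent: {slope:.2f} (theory predicts ~{4/3:.2f} minus a small "
      "cortical-thickness correction)")
```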

Relevance:

20.00%

Publisher:

Abstract:

The generalized master equations (GMEs) that contain multiple time scales have been derived quantum mechanically. The GME method has then been applied to a model of charge migration in proteins that invokes the hole hopping between local amino acid sites driven by the torsional motions of the floppy backbones. This model is then applied to analyze the experimental results for sequence-dependent long-range hole transport in DNA reported by Meggers et al. [Meggers, E., Michel-Beyerle, M. E., & Giese, B. (1998) J. Am. Chem. Soc. 120, 12950–12955]. The model has also been applied to analyze the experimental results of femtosecond dynamics of DNA-mediated electron transfer reported by Zewail and co-workers [Wan, C., Fiebig, T., Kelley, S. O., Treadway, C. R., Barton, J. K. & Zewail, A. H. (1999) Proc. Natl. Acad. Sci. USA 96, 6014–6019]. The initial events in the dynamics of protein folding have begun to attract attention. The GME obtained in this paper will be applicable to this problem.
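
The hole-hopping picture underlying this analysis can be written, in its simplest memoryless (Markovian) limit, as a kinetic master equation dP/dt = K P for the site populations; the sketch below integrates such an equation for a short chain with nearest-neighbour hopping and an irreversible trap. This is only the Markovian special case, not the generalized (memory-kernel) equations derived in the paper, and the number of sites and rate constants are hypothetical, not parameters fitted to the Meggers et al. or Zewail data.

```python
import numpy as np
from scipy.linalg import expm

# nearest-neighbour hole hopping along a short chain of sites (e.g. DNA bases
# or amino-acid residues), with an irreversible trap at the last site
n_sites = 5
k_hop = 1.0e9          # hopping rate between neighbours, 1/s (assumed)
k_trap = 5.0e8         # trapping rate at the final site, 1/s (assumed)

K = np.zeros((n_sites, n_sites))
for i in range(n_sites - 1):
    K[i, i] -= k_hop            # leave site i to the right
    K[i + 1, i] += k_hop
    K[i + 1, i + 1] -= k_hop    # leave site i+1 to the left
    K[i, i + 1] += k_hop
K[-1, -1] -= k_trap             # irreversible decay (trapping) at the end

P0 = np.zeros(n_sites)
P0[0] = 1.0                     # hole injected at the first site

for t in (0.1e-9, 1e-9, 10e-9):
    P = expm(K * t) @ P0        # formal solution P(t) = exp(K t) P(0)
    print(f"t = {t * 1e9:4.1f} ns  populations: {np.round(P, 3)}  "
          f"survival: {P.sum():.3f}")
```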

Relevance:

20.00%

Publisher:

Abstract:

The allometric relationships for plant annualized biomass production ("growth") rates, different measures of body size (dry weight and length), and photosynthetic biomass (or pigment concentration) per plant (or cell) are reported for multicellular and unicellular plants representing three algal phyla; aquatic ferns; aquatic and terrestrial herbaceous dicots; and arborescent monocots, dicots, and conifers. Annualized rates of growth G scale as the 3/4-power of body mass M over 20 orders of magnitude of M (i.e., G ∝ M^(3/4)); plant body length L (i.e., cell length or plant height) scales, on average, as the 1/4-power of M over 22 orders of magnitude of M (i.e., L ∝ M^(1/4)); and photosynthetic biomass M_p scales as the 3/4-power of nonphotosynthetic biomass M_n (i.e., M_p ∝ M_n^(3/4)). Because these scaling relationships are indifferent to phylogenetic affiliation and habitat, they have far-reaching ecological and evolutionary implications (e.g., net primary productivity is predicted to be largely insensitive to community species composition or geological age).
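
To make the quarter-power relationships concrete, here is a small numerical illustration of how growth rate and body length scale when body mass changes by several orders of magnitude, using only the exponents reported in the abstract; the mass ratios are arbitrary and no normalization constants are implied.

```python
# quarter-power scaling from the abstract: G ∝ M^(3/4), L ∝ M^(1/4)
for mass_ratio in (1e3, 1e6, 1e12):
    g_ratio = mass_ratio ** 0.75     # growth-rate ratio scales as M^(3/4)
    l_ratio = mass_ratio ** 0.25     # length ratio scales as M^(1/4)
    print(f"mass x{mass_ratio:.0e}: growth x{g_ratio:.2e}, length x{l_ratio:.2e}")
```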