973 results for low-dimensional system
Abstract:
In February 1962, Hamburg experienced its most catastrophic storm surge event of the 20th century. This paper analyses the event using the Twentieth Century Reanalysis (20CR) dataset. The major flood was caused by a strong low-pressure system centred over Scandinavia that was associated with strong north-westerly winds towards the German North Sea coast – the ideal storm surge situation for the Elbe estuary. A comparison of the 20CR dataset with observational data confirms the applicability of the reanalysis data for this extreme event.
Abstract:
When genetic constraints restrict phenotypic evolution, diversification can be predicted to evolve along so-called lines of least resistance. To address the importance of such constraints and their resolution, studies of parallel phenotypic divergence that differ in their age are valuable. Here, we investigate the parapatric evolution of six lake and stream threespine stickleback systems from Iceland and Switzerland, ranging in age from a few decades to several millennia. Using phenotypic data, we test for parallelism in ecotypic divergence between parapatric lake and stream populations and compare the observed patterns to an ancestral-like marine population. We find strong and consistent phenotypic divergence, both among lake and stream populations and between our freshwater populations and the marine population. Interestingly, ecotypic divergence in low-dimensional phenotype space (i.e. single traits) is rapid and often seems to be completed within 100 years. Yet the dimensionality of ecotypic divergence was highest in our oldest systems, and only there was parallel evolution of unrelated ecotypes strong enough to overwrite phylogenetic contingency. Moreover, the dimensionality of divergence in different systems varies between trait complexes, suggesting different constraints and different evolutionary pathways to their resolution among freshwater systems.
Abstract:
Determining the role of different precipitation periods for peak discharge generation is crucial both for projecting future changes in flood probability and for short- and medium-range flood forecasting. In this study, catchment-averaged daily precipitation time series are analyzed prior to annual peak discharge events (floods) in Switzerland. The high number of floods considered – more than 4000 events from 101 catchments have been analyzed – allows us to derive significant information about the role of antecedent precipitation for peak discharge generation. Based on the analysis of precipitation time series, a new separation of flood-related precipitation periods is proposed: (i) the period 0 to 1 day before flood days, when the maximum flood-triggering precipitation rates are generally observed, (ii) the period 2 to 3 days before flood days, when longer-lasting synoptic situations generate "significantly higher than normal" precipitation amounts, and (iii) the period from 4 days to 1 month before flood days, when previous wet episodes may have already preconditioned the catchment. The novelty of this study lies in the separation of antecedent precipitation into precursor antecedent precipitation (4 days before floods or earlier, called PRE-AP) and short-range precipitation (0 to 3 days before floods, a period when precipitation is often driven by one persistent weather situation, such as a stationary low-pressure system). A precise separation of "antecedent" and "peak-triggering" precipitation is not attempted. Instead, the strict definition of antecedent precipitation periods permits a direct comparison of all catchments. The precipitation accumulating 0 to 3 days before an event is the most relevant for floods in Switzerland. PRE-AP precipitation has only a weak and region-specific influence on flood probability.
Floods were significantly more frequent after wet PRE-AP periods only in the Jura Mountains, in the western and eastern Swiss plateau, and at the outlet of large lakes. As a general rule, wet PRE-AP periods enhance the flood probability in catchments with gentle topography, high infiltration rates, and large storage capacity (karstic cavities, deep soils, large reservoirs). In contrast, floods were significantly less frequent after wet PRE-AP periods in glacial catchments because of reduced melt. For the majority of catchments, however, no significant correlation between precipitation amounts and flood occurrence is found when the last 3 days before floods are omitted from the precipitation amounts. Moreover, PRE-AP precipitation was not higher for extreme floods than for frequent annual floods, and was very close to climatology for all floods. The fact that floods are neither significantly more frequent nor more intense after wet PRE-AP periods is a clear indicator of the short discharge memory of Pre-Alpine, Alpine and South Alpine Swiss catchments. Our study raises the question of whether the general perception overestimates the impact of long-term precursory precipitation on floods in such catchments. The results suggest that considering a 3–4 day precipitation period should be sufficient to represent (understand, reconstruct, model, project) Swiss Alpine floods.
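The three precipitation windows defined above can be extracted from a catchment-averaged daily series as follows. This is a minimal numpy sketch; the exact day-indexing convention (whether "0 days before" means the flood day itself) is an assumption, not taken from the paper:

```python
import numpy as np

def antecedent_sums(precip, flood_idx):
    """Sum daily precipitation over the three windows described above,
    relative to a flood day at index `flood_idx`.

    precip    : 1-D array of catchment-averaged daily precipitation (mm/day)
    flood_idx : integer index of the flood day in `precip`

    Returns (trigger, short_synoptic, pre_ap) in mm:
      trigger        : 0-1 days before the flood (flood-triggering rates)
      short_synoptic : 2-3 days before the flood (persistent synoptic situation)
      pre_ap         : 4 days to 1 month before the flood (PRE-AP precursor)
    """
    i = flood_idx
    trigger = precip[i - 1:i + 1].sum()          # flood day and the day before
    short_synoptic = precip[i - 3:i - 1].sum()   # days -3 and -2
    pre_ap = precip[i - 30:i - 3].sum()          # days -30 .. -4
    return trigger, short_synoptic, pre_ap
```

With the 0-3 day windows combined one obtains the "short-range" precipitation that the study finds most relevant, while `pre_ap` gives the precursor amount whose influence is weak and region-specific.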
Abstract:
For probability distributions on ℝ^q, a detailed study of the breakdown properties of some multivariate M-functionals related to Tyler's [Ann. Statist. 15 (1987) 234] 'distribution-free' M-functional of scatter is given. These include a symmetrized version of Tyler's M-functional of scatter, and the multivariate t M-functionals of location and scatter. It is shown that for 'smooth' distributions, the (contamination) breakdown points of Tyler's M-functional of scatter and of its symmetrized version are 1/q and 1 − √(1 − 1/q), respectively. For the multivariate t M-functional, which arises from the maximum likelihood estimate for the parameters of an elliptical t distribution with ν ≥ 1 degrees of freedom, the breakdown point at smooth distributions is 1/(q + ν). Breakdown points are also obtained for general distributions, including empirical distributions. Finally, the sources of breakdown are investigated. It turns out that breakdown can only be caused by contaminating distributions that are concentrated near low-dimensional subspaces.
Abstract:
Public health surveillance programs for vaccine-preventable diseases (VPD) need functional quality assurance (QA) in order to operate with high-quality activities that prevent preventable communicable diseases from spreading in the community. Having a functional QA plan can assure the performance and quality of a program without putting excessive stress on its resources. A functional QA plan acts as a check on the quality of day-to-day activities performed by the VPD surveillance program while also providing data useful for evaluating the program. This study developed a QA plan that involves collection, collation, analysis and reporting of information based on standardized (predetermined) formats and indicators as an integral part of routine work for the vaccine-preventable disease surveillance program at the City of Houston Department of Health and Human Services. The QA plan also provides sampling and analysis plans for assessing various QA indicators, as well as recommendations to the Houston Department of Health and Human Services for implementing the QA plan. The QA plan developed for VPD surveillance in the City of Houston is intended to be a low-cost system that could serve as a template for QA plans in other public health programs, not only in the city or the nation, but anywhere across the globe. A QA plan for VPD surveillance in the City of Houston would also serve funding agencies such as the CDC by assuring that resources are being expended efficiently while achieving the real goal of positively impacting the health and lives of the recipient/target population.
Abstract:
Changes in paleoclimate and paleoproductivity patterns have been identified by analysing, in conjunction with other available proxy data, the coccolithophore assemblages from core MD03-2699, located on the Portuguese margin, in the time interval from the Marine Isotope Stage (MIS) 13/14 boundary to MIS 9 (535 to 300 ka). During the Mid-Brunhes event, the assemblages associated with the eccentricity minima are characterised by higher nannoplankton accumulation rate (NAR) values and by the blooming of the opportunistic genus Gephyrocapsa. Changes in coccolithophore abundance are also related to glacial-interglacial cycles. Higher NAR and numbers of coccoliths/g mainly occurred during the interglacial periods, while these values decreased during the glacial periods. Superimposed on the glacial/interglacial cycles, climatic and paleoceanographic variability has been observed on precessional timescales. The structure of the assemblages highlights the prevailing long-term influence of the Portugal Current (PC) and the Iberian Poleward Current (IPC), following half- and full-precession harmonics, related to the migration of the Azores High (AH) pressure system. Small Gephyrocapsa and Coccolithus pelagicus braarudii are regarded as good indicators of periods of prevailing PC influence. Gephyrocapsa caribbeanica, Syracosphaera spp., Rhabdosphaera spp. and Umbilicosphaera sibogae denote periods of IPC influence. Our data also highlight the increased percentages of Coccolithus pelagicus pelagicus during episodes of very cold, low-salinity surface water, probably related to abrupt climatic events and millennial-scale oscillations of the AH/Icelandic Low (IL) system.
Abstract:
Let G be a reductive complex Lie group acting holomorphically on normal Stein spaces X and Y, which are locally G-biholomorphic over a common categorical quotient Q. When is there a global G-biholomorphism X → Y? If the actions of G on X and Y are what we, with justification, call generic, we prove that the obstruction to solving this local-to-global problem is topological and provide sufficient conditions for it to vanish. Our main tool is the equivariant version of Grauert's Oka principle due to Heinzner and Kutzschebauch. We prove that X and Y are G-biholomorphic if X is K-contractible, where K is a maximal compact subgroup of G, or if X and Y are smooth and there is a G-diffeomorphism ψ : X → Y over Q, which is holomorphic when restricted to each fibre of the quotient map X → Q. We prove a similar theorem when ψ is only a G-homeomorphism, but with an assumption about its action on G-finite functions. When G is abelian, we obtain stronger theorems. Our results can be interpreted as instances of the Oka principle for sections of the sheaf of G-biholomorphisms from X to Y over Q. This sheaf can be badly singular, even for a low-dimensional representation of SL2(ℂ). Our work is in part motivated by the linearisation problem for actions on ℂn. It follows from one of our main results that a holomorphic G-action on ℂn, which is locally G-biholomorphic over a common quotient to a generic linear action, is linearisable.
Abstract:
The Self-Organizing Map (SOM) is a neural network model that performs an ordered projection of a high-dimensional input space onto a low-dimensional topological structure. The process by which such a mapping is formed is defined by the SOM algorithm, which is a competitive, unsupervised and nonparametric method, since it does not make any assumption about the input data distribution. The feature maps provided by this algorithm have been successfully applied to vector quantization, clustering and high-dimensional data visualization. However, the initialization of the network topology and the selection of the SOM training parameters are two difficult tasks, owing to the unknown distribution of the input signals. A misconfiguration of these parameters can generate a low-quality feature map, so it is necessary to have some measure of the degree of adaptation of the SOM network to the input data model. Topology preservation is the concept most commonly used to implement this measure. Several qualitative and quantitative methods have been proposed for measuring the degree of SOM topology preservation, particularly for Kohonen's model. In this work, two methods for measuring the topology preservation of the Growing Cell Structures (GCS) model are proposed: the topographic function and the topology preserving map.
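The competitive, neighbourhood-driven update at the heart of Kohonen's SOM algorithm can be sketched as follows. This is a minimal toy implementation of the classic fixed-grid SOM (not the Growing Cell Structures variant studied here); grid size, learning-rate and radius schedules are assumptions:

```python
import numpy as np

def train_som(data, grid=(6, 6), epochs=20, lr0=0.5, sigma0=2.0, seed=0):
    """Minimal Kohonen SOM sketch. Each epoch: find the best-matching
    unit (BMU) for every sample and pull the BMU and its grid neighbours
    towards the sample, with decaying learning rate and radius."""
    rng = np.random.default_rng(seed)
    rows, cols = grid
    # grid coordinates of each unit, used by the neighbourhood function
    coords = np.array([(r, c) for r in range(rows) for c in range(cols)], float)
    weights = rng.normal(size=(rows * cols, data.shape[1]))
    for t in range(epochs):
        lr = lr0 * (1 - t / epochs)                  # decaying learning rate
        sigma = sigma0 * (1 - t / epochs) + 1e-3     # decaying radius
        for x in data:
            bmu = np.argmin(((weights - x) ** 2).sum(axis=1))
            # Gaussian neighbourhood on the 2-D topological grid
            d2 = ((coords - coords[bmu]) ** 2).sum(axis=1)
            h = np.exp(-d2 / (2 * sigma ** 2))
            weights += lr * h[:, None] * (x - weights)
    return weights
```

The topology-preservation measures discussed in the abstract quantify how well the grid neighbourhoods produced by such training reflect neighbourhoods in the input space.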
Abstract:
Several techniques based on local proper orthogonal decomposition (POD) and Galerkin-type projection are developed to accelerate the numerical integration of nonlinear parabolic evolution problems. The ideas and methods presented involve a new approach to POD-type modelling, which combines short time intervals in which a standard numerical scheme is used with other time intervals in which the Galerkin-type systems obtained by projecting the evolution equations onto the linear manifold spanned by the POD modes are used, the modes being computed from snapshots taken in the intervals where the numerical code is active. The POD manifold is fully constructed in the first interval, but is only updated in the remaining intervals according to the dynamics of the solution, thereby increasing the efficiency of the resulting reduced-order model. In addition, some properties associated with the weak dependence of the POD modes on both the time variable and any parameters of the problem are exploited, which increases the flexibility and computational efficiency of the process. The application of the resulting methods is very promising, both in the simulation of transients in laminar flows and in the construction of bifurcation diagrams in parameter-dependent systems. The ideas and algorithms developed in the thesis are illustrated in two test problems: the one-dimensional complex Ginzburg-Landau equation and the two-dimensional unsteady lid-driven cavity problem. Abstract: Various ideas and methods involving local proper orthogonal decomposition (POD) and Galerkin projection are presented, aiming at accelerating the numerical integration of nonlinear time-dependent parabolic problems.
The proposed methods come from a new approach to POD-based model reduction procedures, which combines short runs with a given numerical solver and a reduced-order model constructed by expanding the solution of the problem into appropriate POD modes, which span a POD manifold, and Galerkin-projecting some evolution equations onto that linear manifold. The POD manifold is completely constructed from the outset, but only updated as time proceeds according to the dynamics, which yields an adaptive and flexible procedure. In addition, some properties concerning the weak dependence of the POD modes on time and possible parameters in the problem are exploited in order to increase the flexibility and efficiency of the low-dimensional model computation. Application of the developed techniques to the approximation of transients in laminar fluid flows and the simulation of attractors in bifurcation problems shows very promising results. The test problems considered to illustrate the various ideas and check the performance of the algorithms are the one-dimensional complex Ginzburg-Landau equation and the two-dimensional unsteady lid-driven cavity problem.
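The snapshot-based construction of POD modes mentioned above is standardly done with an SVD of the snapshot matrix; a minimal sketch, not the thesis code (the energy threshold is an assumed parameter):

```python
import numpy as np

def pod_modes(snapshots, energy=0.999):
    """Proper orthogonal decomposition via SVD.

    snapshots : (dim, n) matrix whose columns are solution snapshots
    energy    : fraction of snapshot energy the retained modes must capture

    Returns the leading left singular vectors (the POD modes) and their
    singular values; a Galerkin reduced-order model is then obtained by
    projecting the evolution equations onto the span of these modes."""
    U, s, _ = np.linalg.svd(snapshots, full_matrices=False)
    cum = np.cumsum(s ** 2) / np.sum(s ** 2)
    k = int(np.searchsorted(cum, energy) + 1)   # smallest k reaching `energy`
    return U[:, :k], s[:k]
```

Projecting a state `x` onto the manifold is then `modes @ (modes.T @ x)`, which is the operation the reduced-order model works with between updates of the manifold.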
Abstract:
Having access to instrumentation and electronic equipment is vital in the design of analogue circuits. It makes it possible to carry out the necessary tests and the study required for these circuits to work properly. The equipment can be divided into excitation instruments, which supply the signals to the circuit, and measuring instruments, which measure the signals generated by the circuit. This equipment is of great help, but it is also expensive, which in many cases makes it unavailable. Because of this main drawback, it becomes necessary to obtain a low-cost device that can somehow replace the real equipment. If the instrument is a measuring one, this low-cost system can be implemented with a hardware unit in charge of acquiring the data and an application running on a computer that analyses the data and presents it on screen. If the instrument is an excitation one, the only task of the hardware system is to supply the signals whose configuration the computer has sent. In a real instrument, it is the instrument itself that must perform all these actions: acquisition, processing and presentation of the data. Moreover, the difficulty of modifying or extending the functionality of a traditional instrument, compared with an application running on a computer, is evident. Since a traditional instrument is a closed system, whereas in the latter the configuration and data processing are handled by an application, some modifications could be made simply by changing the software of the control program, so their cost would be lower. This project aims to implement a hardware system that has the characteristics and performs the functions of the real equipment found in an electronics laboratory, together with an application in charge of controlling the system and analysing the acquired signals, whose graphical interface resembles that of real equipment in order to facilitate its use. ABSTRACT. The instrumentation and electronic equipment are vital for the design of analogue circuits. They make it possible to perform the necessary testing and study for the proper functioning of these circuits. The devices can be classified into the following categories: excitation instruments, which transmit signals to the circuit, and measuring instruments, which measure the signals produced by the circuit. This equipment is considerably helpful; however, its high price often makes it hardly accessible. For this reason, low-cost equipment is needed to replace the real devices. If the instrument is a measuring one, the low-cost system can be implemented with hardware equipment that acquires the data and an application running on a computer that analyses the data and presents it on screen. If the instrument is an excitation one, the only task of the hardware system is to provide the signals whose configuration was sent by the computer. In a real instrument, it is the instrument itself that must perform all these actions: acquisition, processing and presentation of data. Moreover, the difficulty of making changes or additions to the features of a traditional device, with respect to an application running on a computer, is evident. This is because a traditional instrument is a closed system, whereas in the latter the configuration and data processing are done by an application. Therefore, certain changes can be made just by modifying the control-program software, and the cost of these modifications is consequently lower. This project aims to implement a hardware system with the same features and functions as the real devices available in an electronics laboratory. It also aims to develop an application for the monitoring and analysis of the acquired signals, provided with a graphical interface resembling those of real devices in order to facilitate its use.
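The division of labour described above — the hardware acquires, the PC application processes and displays — can be illustrated with a tiny host-side sketch. Everything here is hypothetical: the streaming format (comma-separated ADC codes), reference voltage and ADC resolution are assumptions, not the project's actual protocol:

```python
import math

def process_samples(raw_line, vref=3.3, bits=12):
    """Hypothetical host-side processing for a low-cost instrument:
    parse a line of comma-separated ADC codes streamed by the hardware,
    convert them to volts, and compute the basic statistics a real
    measuring instrument would display (mean, RMS, peak-to-peak)."""
    codes = [int(v) for v in raw_line.split(",") if v.strip()]
    volts = [c * vref / (2 ** bits - 1) for c in codes]  # code -> volts
    mean = sum(volts) / len(volts)
    rms = math.sqrt(sum(v * v for v in volts) / len(volts))
    return {"mean": mean, "rms": rms, "vpp": max(volts) - min(volts)}
```

Because the processing lives in software, extending the "instrument" (adding, say, a new statistic) is a small code change rather than a hardware modification — precisely the advantage the abstract argues for.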
Abstract:
The monkey anterior intraparietal area (AIP) encodes visual information about three-dimensional object shape that is used to shape the hand for grasping. In robotics a similar role has been played by modules that fit point cloud data to the superquadric family of shapes and its various extensions. We developed a model of shape tuning in AIP based on cosine tuning to superquadric parameters. However, the model did not fit the data well, and we also found that it was difficult to accurately reproduce these parameters using neural networks with the appropriate inputs (modelled on the caudal intraparietal area, CIP). The latter difficulty was related to the fact that there are large discontinuities in the superquadric parameters between very similar shapes. To address these limitations we adopted an alternative shape parameterization based on an Isomap nonlinear dimension reduction. The Isomap was built using gradients and curvatures of object surface depth. This alternative parameterization was low-dimensional (like superquadrics), but data-driven (similar to an alternative clustering approach that is also sometimes used in robotics) and lacked large discontinuities. Isomaps with 16 or more dimensions reproduced the AIP data fairly well. Moreover, we found that the Isomap parameters could be approximated from CIP-like input much more accurately than the superquadric parameters. We conclude that Isomaps, or perhaps alternative dimension reductions of CIP signals, provide a promising model of AIP tuning. We have now started to integrate our model with a robot hand, to explore the efficacy of Isomap shape reductions in grasp planning. Future work will consider dynamics of spike responses and integration with related visual and motor area models.
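The Isomap reduction used above for shape parameterization follows the standard pipeline: k-nearest-neighbour graph, geodesic (shortest-path) distances, then classical MDS. A minimal numpy/scipy sketch of that pipeline, not the authors' implementation:

```python
import numpy as np
from scipy.sparse.csgraph import shortest_path
from scipy.spatial.distance import pdist, squareform

def isomap(X, n_neighbors=8, n_components=2):
    """Standard Isomap: build a k-NN graph on the samples, approximate
    geodesic distances by shortest paths in that graph, then embed with
    classical MDS. X is (n_samples, n_features)."""
    D = squareform(pdist(X))                      # Euclidean distances
    n = D.shape[0]
    # keep only each point's k nearest neighbours as graph edges
    G = np.full_like(D, np.inf)
    idx = np.argsort(D, axis=1)[:, 1:n_neighbors + 1]
    rows = np.repeat(np.arange(n), n_neighbors)
    G[rows, idx.ravel()] = D[rows, idx.ravel()]
    G = np.minimum(G, G.T)                        # symmetrize the graph
    geo = shortest_path(G, method="D", directed=False)
    # classical MDS on the squared geodesic distances
    J = np.eye(n) - np.ones((n, n)) / n
    B = -0.5 * J @ (geo ** 2) @ J
    w, V = np.linalg.eigh(B)
    order = np.argsort(w)[::-1][:n_components]
    return V[:, order] * np.sqrt(np.abs(w[order]))
```

Unlike the superquadric parameterization, coordinates produced this way vary smoothly along the data manifold, which is the property the abstract identifies as avoiding large discontinuities between similar shapes.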
Abstract:
Autonomous landing is a challenging and important technology for both military and civilian applications of Unmanned Aerial Vehicles (UAVs). In this paper, we present a novel online adaptive visual tracking algorithm that allows a UAV to land autonomously on an arbitrary field (usable as a helipad) at real-time frame rates of more than twenty frames per second. The integration of a low-dimensional subspace representation method, an online incremental learning approach and a hierarchical tracking strategy allows the autolanding task to overcome the problems generated by challenging situations such as significant appearance change, varying surrounding illumination, partial helipad occlusion, rapid pose variation, onboard mechanical vibration (no video stabilization), low computational capacity and delayed information communication between the UAV and the Ground Control Station (GCS). The tracking performance of the presented algorithm is evaluated on aerial images from real autolanding flights using a manually-labelled ground-truth database. The evaluation results show that the new algorithm is highly robust in tracking the helipad and accurate enough for closing the vision-based control loop.
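The low-dimensional subspace representation with online incremental learning mentioned above can be illustrated with a toy model: maintain the top-k singular vectors of the image patches seen so far and fold new observations in by a merge-and-truncate SVD. This is a hypothetical sketch of the general idea, not the paper's tracker (a full incremental SVD would also handle mean updates and forgetting factors):

```python
import numpy as np

class IncrementalSubspace:
    """Toy online low-dimensional appearance model: keep the top-k left
    singular vectors of all patches seen so far, updating via an SVD of
    [current basis * singular values | new patches]."""
    def __init__(self, k):
        self.k, self.U, self.s = k, None, None

    def update(self, patches):
        """patches: (dim, n) array of new observations as columns."""
        if self.U is None:
            stacked = patches
        else:
            # merge the compressed old data with the new observations
            stacked = np.hstack([self.U * self.s, patches])
        U, s, _ = np.linalg.svd(stacked, full_matrices=False)
        self.U, self.s = U[:, :self.k], s[:self.k]

    def residual(self, x):
        """Reconstruction error of a patch; low error = looks like the
        tracked target, so this can score candidate helipad locations."""
        p = self.U @ (self.U.T @ x)
        return float(np.linalg.norm(x - p))
```

In a tracker, the residual serves as the matching score for candidate windows, and `update` adapts the model to appearance and illumination changes frame by frame.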
Abstract:
The 8-dimensional Luttinger–Kohn–Pikus–Bir Hamiltonian matrix may be viewed as four 4-dimensional blocks. A 4-band Hamiltonian is presented, obtained by setting the non-diagonal blocks to zero. The parameters of the new Hamiltonian are adjusted to fit the calculated effective masses and strained QD bandgap to the measured ones. The 4-dimensional Hamiltonian thus obtained agrees well with the measured quantum efficiency of a quantum dot intermediate band solar cell, and the full absorption spectrum can be calculated in about two hours using Mathematica© on a notebook computer. This is a hundred times faster than with the commonly used 8-band Hamiltonian and is considered suitable for helping design engineers in the development of nanostructured solar cells.
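The block decoupling described above can be illustrated numerically: zeroing the off-diagonal 4×4 blocks of an 8×8 Hermitian matrix makes its spectrum the union of the spectra of the two diagonal blocks, so each 4×4 block can be diagonalized independently. A generic numpy sketch, not the actual Luttinger–Kohn–Pikus–Bir parameterization:

```python
import numpy as np

def block_diagonal_approx(H8):
    """Drop the off-diagonal 4x4 blocks of an 8x8 Hamiltonian, leaving
    two decoupled 4x4 problems (the 4-band approximation described above)."""
    A = H8[:4, :4]           # upper-left 4x4 block
    D = H8[4:, 4:]           # lower-right 4x4 block
    H = np.zeros_like(H8)    # off-diagonal blocks set to zero
    H[:4, :4], H[4:, 4:] = A, D
    return H, A, D
```

Since diagonalization costs grow faster than linearly with matrix size and must be repeated over many k-points, solving two 4×4 problems instead of one coupled 8×8 problem is the source of the speed-up the abstract reports (the actual factor depends on the full absorption calculation, not just the eigensolve).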
Abstract:
This work presents and discusses the results of a broad, in-depth study of the main operational parameters of dissolved air flotation used in the post-treatment of effluents from an expanded-bed anaerobic reactor (RALEx) treating 10 m³/hour of sanitary sewage. Preliminary tests were carried out with a flotatest, a laboratory-scale flotation unit, to identify the best coagulant (ferric chloride) dosages; the most suitable polymer among the 26 tested, and its dosage; the appropriate coagulation pH; the most suitable flocculation time (Tf) and velocity gradient (Gf); and the required amount of air (S*). To obtain suitable operating conditions for the pilot flotation unit, Tf was varied from zero to 24 minutes and Gf from 40 to 100 s⁻¹. Ferric chloride and synthetic polymer concentrations ranged from 15 to 92 mg/L and from 0.25 to 7.0 mg/L, respectively. S* ranged from 2.85 to 28.5 grams of air per cubic metre of effluent, and the surface loading rate of the flotation unit ranged from 180 to 250 m³/m²/d. Flotation performance during the start-up of the anaerobic reactor was also investigated. Using 50 mg/L of ferric chloride, Tf of 20 min, Gf of 80 s⁻¹, S* of 19.7 g of air per m³ of effluent and a loading rate of 180 m³/m²/d produced excellent results in the pilot flotation unit, with high removals of COD load (80.6%), total phosphorus (90.1%) and total suspended solids (92.1%), turbidity between 1.6 and 15.4 uT, residual iron of 0.5 mg/L, and an estimated removal, as sludge, of 77 grams of TSS per m³ of treated effluent. Under the same conditions, the RALEx+DAF system achieved overall removals of 91.6% of the COD load, 90.1% of the phosphorus load and 96.6% of the TSS load.
The use of dissolved air flotation (DAF) proved to be a very attractive alternative for the post-treatment of effluents from anaerobic reactors. If coagulation is well adjusted, a system composed of an anaerobic reactor followed by a flotation unit can achieve excellent removal of organic matter, a significant reduction in phosphorus and suspended-solids concentrations, and precipitation of the dissolved sulphides generated in the anaerobic reactor. Good results were achieved even when the RALEx reactor produced low-quality effluent during its start-up period. During that period, the flotation system acted as an effective barrier, preventing the discharge of low-quality effluent from the system.
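The removal efficiencies quoted above follow the usual load-based definition; a trivial sketch of that arithmetic:

```python
def removal_efficiency(influent, effluent):
    """Removal efficiency (%) of a pollutant load, as used for the COD,
    total phosphorus and TSS figures above: the fraction of the influent
    load that does not leave with the effluent."""
    return 100.0 * (influent - effluent) / influent
```

For example, an influent COD load of 100 units reduced to 19.4 units corresponds to the 80.6% removal reported for the flotation stage.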