884 results for "Piecewise linear systems with two zones"


Relevance:

100.00%

Publisher:

Abstract:

A complete census of planetary systems around a volume-limited sample of solar-type stars (FGK dwarfs) in the Solar neighborhood (d ≤ 15 pc), with uniform sensitivity down to Earth-mass planets within their Habitable Zones out to several AU, would be a major milestone in extrasolar planet astrophysics. This fundamental goal can be achieved with a mission concept such as NEAT, the Nearby Earth Astrometric Telescope. NEAT is designed to carry out space-borne, extremely-high-precision astrometric measurements at the 0.05 μas (1σ) accuracy level, sufficient to detect the dynamical effects of orbiting planets with masses even lower than Earth's around the nearest stars. Such a survey mission would provide the actual planetary masses and the full orbital geometry for all components of the detected planetary systems down to the Earth-mass limit. The NEAT performance limits can be achieved by carrying out differential astrometry between the targets and a set of suitable reference stars in the field. The NEAT instrument design consists of an off-axis-parabola single-mirror telescope (D = 1 m); a large-field-of-view detector located 40 m away from the telescope, made of eight small movable CCDs surrounding a fixed central CCD; and an interferometric calibration system monitoring dynamical Young's fringes originating from metrology fibers located at the primary mirror. The mission profile is driven by the fact that the two main modules of the payload, the telescope and the focal plane, must be kept 40 m apart, leading to the choice of formation flying as the reference mission option, with a deployable boom as an alternative. The proposed mission architecture relies on two satellites of about 700 kg each, operating at L2 for 5 years, flying in formation and offering a capability of more than 20,000 reconfigurations.
The two satellites will be launched in a stacked configuration on a Soyuz ST launch vehicle. The NEAT primary science program will comprise an astrometric survey of our 200 closest F-, G-, and K-type stellar neighbors, with an average of 50 visits each distributed over the nominal mission duration. The main survey will use approximately 70% of the mission lifetime. The remaining 30% of NEAT observing time might be allocated, for example, to improving the characterization of the architecture of selected planetary systems around nearby targets of specific interest (low-mass stars, young stars, etc.) discovered by Gaia, ground-based high-precision radial-velocity surveys, and other programs. With its exquisite astrometric precision, NEAT holds the promise of providing the first thorough census of Earth-mass planets around stars in the immediate vicinity of our Sun.
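As a sanity check on the quoted accuracy requirement, the classic astrometric-signature relation α["] = (m_p/M_*)(a[AU]/d[pc]) can be evaluated directly. The helper below is an illustrative sketch using that textbook formula; the function name and the Earth/Sun mass-ratio constant are ours, not taken from the NEAT design documents.

```python
def astrometric_signature_uas(planet_mass_mearth, star_mass_msun, a_au, d_pc):
    """Astrometric wobble amplitude in micro-arcseconds.

    Uses alpha["] = (m_p / M_*) * (a [AU] / d [pc]), the standard
    textbook relation. Earth/Sun mass ratio ~3.003e-6.
    """
    EARTH_SUN_MASS_RATIO = 3.003e-6
    mass_ratio = planet_mass_mearth * EARTH_SUN_MASS_RATIO / star_mass_msun
    alpha_arcsec = mass_ratio * a_au / d_pc
    return alpha_arcsec * 1e6  # arcsec -> micro-arcsec

# An Earth analogue around a Sun-like star at 10 pc:
signal = astrometric_signature_uas(1.0, 1.0, 1.0, 10.0)  # ~0.3 uas
```

At ~0.3 μas, such a signal sits roughly six times above the 0.05 μas (1σ) single-measurement accuracy quoted above, consistent with the stated goal of detecting sub-Earth-mass planets around the nearest stars.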

Relevance:

100.00%

Publisher:

Abstract:

This study evaluated the five-year clinical performance of ceramic inlays and onlays made with two systems: sintered Duceram (Dentsply-Degussa) and pressable IPS Empress (Ivoclar Vivadent). Eighty-six restorations were placed by a single operator in 35 patients with a median age of 33 years. The restorations were cemented with dual-cured resin cement (Variolink II, Ivoclar Vivadent) and Syntac Classic adhesive under rubber dam. Evaluations were conducted by two independent investigators at baseline and at one, two, three, and five years using the modified United States Public Health Service (USPHS) criteria. At the five-year recall, 26 patients (74.28%) were evaluated, totalling 62 (72.09%) restorations. Four IPS Empress restorations had fractured, two restorations presented secondary caries (one IPS Empress and one Duceram), and two restorations showed unacceptable defects at the restoration margin and needed replacement (one from each ceramic system). An overall success rate of 87% was recorded. The Fisher exact test revealed no significant difference between the Duceram and IPS Empress ceramic systems for any aspect evaluated at the different recall appointments (p>0.05). The McNemar chi-square test showed significant differences in marginal discoloration, marginal integrity, and surface texture between baseline and the five-year recall for both systems (p<0.001), with an increased percentage of Bravo scores. However, few Charlie or Delta scores were attributed to these restorations. In conclusion, both ceramic materials demonstrated acceptable clinical performance after five years.

Relevance:

100.00%

Publisher:

Abstract:

The mean majority deficit in a two-tier voting system is a function of the partition of the population. We derive a new square-root rule: For odd-numbered population sizes and equipopulous units the mean majority deficit is maximal when the member size of the units in the partition is close to the square root of the population size. Furthermore, within the partitions into roughly equipopulous units, partitions with small even numbers of units or small even-sized units yield high mean majority deficits. We discuss the implications for the winner-takes-all system in the US Electoral College.
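The square-root rule above can be explored numerically. The sketch below gives a Monte Carlo estimate under an impartial-culture assumption (i.i.d. fair-coin voters); that probabilistic model is our illustrative choice, not necessarily the paper's analytical setting. "Majority deficit" is taken here as the margin, if any, by which the popular majority is overruled by the two-tier outcome.

```python
import random
import statistics

def mean_majority_deficit(n_units, unit_size, trials=4000, seed=1):
    """Monte Carlo estimate of the mean majority deficit in a two-tier
    voting system: each unit elects a delegate by simple majority,
    delegates decide by majority, and the deficit is the margin (if
    any) by which the overall popular majority is overruled.

    Voters are i.i.d. fair coin flips (impartial culture), an
    illustrative assumption. Odd unit sizes and odd numbers of units
    avoid ties.
    """
    rng = random.Random(seed)
    population = n_units * unit_size
    deficits = []
    for _ in range(trials):
        votes_a = 0
        units_a = 0
        for _ in range(n_units):
            a = sum(rng.getrandbits(1) for _ in range(unit_size))
            votes_a += a
            units_a += a * 2 > unit_size
        winner_a = units_a * 2 > n_units
        winner_votes = votes_a if winner_a else population - votes_a
        deficits.append(max(0, population - 2 * winner_votes))
    return statistics.mean(deficits)
```

For example, with a population of 225 one can compare partitions into 45 units of 5, 15 units of 15 (unit size = √225), and 5 units of 45, and observe where the estimated deficit peaks.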

Relevance:

100.00%

Publisher:

Abstract:

Fixed-point arithmetic is a very widespread design choice for systems with tight area, power, or performance constraints. To produce implementations whose costs are minimized without negatively impacting the accuracy of the results, we must carry out a careful assignment of word-lengths. Finding the optimal combination of fixed-point word-lengths for a given system is an NP-hard combinatorial problem to which designers devote between 25 and 50% of the design cycle. Reconfigurable hardware platforms, such as FPGAs, also benefit from the advantages offered by fixed-point arithmetic, since it compensates for the lower clock frequencies and the less efficient hardware usage of these platforms with respect to ASICs. As FPGAs become popular for scientific computing, designs grow in size and complexity to the point where they can no longer be handled efficiently by current signal and quantization-noise modelling and word-length optimization techniques. In this Ph.D. thesis we explore different aspects of the quantization problem and present new methodologies for each of them. Techniques based on interval extensions have made it possible to obtain very precise models of signal and quantization-noise propagation in systems with non-linear operations. We take this approach a step further by introducing elements of Multi-Element Generalized Polynomial Chaos (ME-gPC) and combining them with a modern technique based on statistical Modified Affine Arithmetic (MAA) in order to model systems that contain control-flow structures. Our methodology generates the different execution paths automatically, determines the regions of the input domain that will exercise each of them, and extracts the statistical moments of the system from those partial solutions.
We use this technique to estimate both the dynamic range and the round-off noise in systems with the aforementioned control-flow structures, and we show the accuracy of our approach, which in certain case studies with non-linear operators deviates by as little as 0.04% from the reference values obtained by simulation. A known drawback of techniques based on interval extensions is the combinatorial explosion of terms as the size of the systems under study grows, which leads to scalability problems. To face this problem we present a clustered noise-injection technique that groups the signals of the system, introduces the noise sources for each group separately, and finally combines the results of all of them. In this way the number of noise sources is kept under control at every moment and, because of this, the combinatorial explosion is minimized. We also present a multi-way partitioning algorithm intended to minimize the deviation of the results caused by the loss of correlation between noise terms, with the goal of keeping the results as accurate as possible. This thesis also addresses the development of word-length optimization methodologies based on Monte Carlo simulations that run in reasonable times. To this end we present two new techniques that explore the reduction of execution time from different angles. First, the interpolative method applies a simple but precise interpolator to estimate the sensitivity of each signal, which is then used during the optimization stage.
Second, the incremental method revolves around the fact that, although we strictly need to maintain a given confidence interval for the final results of our search, we can use more relaxed confidence levels, which translates into fewer trials per simulation, in the initial stages of the search, when we are still far from the optimized solutions. Through these two approaches we show that the execution time of classical greedy search algorithms can be accelerated by factors of up to ×240 for small/medium-sized problems. Finally, this book presents HOPLITE, an automated, flexible, and modular quantization infrastructure that implements the above techniques and is publicly available. Its goal is to offer developers and researchers a common environment for easily prototyping and verifying new quantization methodologies. We describe its workflow, justify the design decisions taken, explain its public API, and give a step-by-step demonstration of its operation. We also show, through a simple example, how to connect new extensions to the tool using the existing interfaces in order to expand and improve the capabilities of HOPLITE.
ABSTRACT Using fixed-point arithmetic is one of the most common design choices for systems where area, power or throughput are heavily constrained. In order to produce implementations where the cost is minimized without negatively impacting the accuracy of the results, a careful assignment of word-lengths is required. The problem of finding the optimal combination of fixed-point word-lengths for a given system is a combinatorial NP-hard problem to which developers devote between 25 and 50% of the design-cycle time.
Reconfigurable hardware platforms such as FPGAs also benefit from the advantages of fixed-point arithmetic, as it compensates for the slower clock frequencies and less efficient area utilization of the hardware platform with respect to ASICs. As FPGAs become commonly used for scientific computation, designs constantly grow larger and more complex, up to the point where they cannot be handled efficiently by current signal and quantization noise modelling and word-length optimization methodologies. In this Ph.D. thesis we explore different aspects of the quantization problem and present new methodologies for each of them: Techniques based on extensions of intervals have made it possible to obtain accurate models of the signal and quantization noise propagation in systems with non-linear operations. We take this approach a step further by introducing elements of Multi-Element Generalized Polynomial Chaos (ME-gPC) and combining them with a state-of-the-art statistical Modified Affine Arithmetic (MAA) based methodology in order to model systems that contain control-flow structures. Our methodology produces the different execution paths automatically, determines the regions of the input domain that will exercise them, and extracts the system's statistical moments from the partial results. We use this technique to estimate both the dynamic range and the round-off noise in systems with the aforementioned control-flow structures. We show the good accuracy of our approach, which in some case studies with non-linear operators shows a deviation of only 0.04% with respect to the simulation-based reference values. A known drawback of the techniques based on extensions of intervals is the combinatorial explosion of terms as the size of the targeted systems grows, which leads to scalability problems.
To address this issue we present a clustered noise injection technique that groups the signals in the system, introduces the noise terms in each group independently, and then combines the results at the end. In this way, the number of noise sources in the system at a given time is controlled and, because of this, the combinatorial explosion is minimized. We also present a multi-way partitioning algorithm aimed at minimizing the deviation of the results due to the loss of correlation between noise terms, in order to keep the results as accurate as possible. This Ph.D. thesis also covers the development of methodologies for word-length optimization based on Monte Carlo simulations that run in reasonable times. We do so by presenting two novel techniques that explore the reduction of the execution times by approaching the problem in two different ways: First, the interpolative method applies a simple but precise interpolator to estimate the sensitivity of each signal, which is later used to guide the optimization effort. Second, the incremental method revolves around the fact that, although we strictly need to guarantee a certain confidence level in the simulations for the final results of the optimization process, we can use more relaxed levels, which in turn implies a considerably smaller number of samples, in the initial stages of the process, when we are still far from the optimized solution. Through these two approaches we demonstrate that the execution time of classical greedy techniques can be accelerated by factors of up to ×240 for small/medium-sized problems. Finally, this book introduces HOPLITE, an automated, flexible, and modular framework for quantization that implements the previous techniques and is publicly available. The aim is to offer developers and researchers a common ground for easily prototyping and verifying new techniques for system modelling and word-length optimization.
We describe its workflow, justify the design decisions taken, explain its public API, and give a step-by-step demonstration of its execution. We also show, through an example, the way new extensions to the flow should be connected to the existing interfaces in order to expand and improve the capabilities of HOPLITE.
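The interval-extension machinery discussed above is easiest to see on plain affine arithmetic, the basis that statistical MAA refines. The class below is a generic textbook sketch, not code from HOPLITE; all names are our own.

```python
import itertools

_ids = itertools.count()  # fresh ids for noise symbols

class Affine:
    """Affine form x0 + sum_i xi*eps_i, with each eps_i in [-1, 1].

    Shared noise symbols track correlation between quantities, which
    is what lets affine arithmetic beat naive interval arithmetic.
    """

    def __init__(self, center, terms=None):
        self.center = center
        self.terms = dict(terms or {})

    @classmethod
    def from_interval(cls, lo, hi):
        return cls((lo + hi) / 2, {next(_ids): (hi - lo) / 2})

    def __add__(self, other):
        terms = dict(self.terms)
        for k, v in other.terms.items():
            terms[k] = terms.get(k, 0.0) + v
        return Affine(self.center + other.center, terms)

    def __sub__(self, other):
        terms = dict(self.terms)
        for k, v in other.terms.items():
            terms[k] = terms.get(k, 0.0) - v
        return Affine(self.center - other.center, terms)

    def __mul__(self, other):
        # linear part is exact; the quadratic residue is
        # over-approximated with a fresh noise symbol
        terms = {k: other.center * v for k, v in self.terms.items()}
        for k, v in other.terms.items():
            terms[k] = terms.get(k, 0.0) + self.center * v
        residue = self.radius() * other.radius()
        if residue:
            terms[next(_ids)] = residue
        return Affine(self.center * other.center, terms)

    def radius(self):
        return sum(abs(v) for v in self.terms.values())

    def interval(self):
        r = self.radius()
        return (self.center - r, self.center + r)
```

Because noise symbols are shared, correlated terms cancel exactly: for x in [1, 3], x - x evaluates to exactly [0, 0], where plain interval arithmetic would give [-2, 2].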

Relevance:

100.00%

Publisher:

Abstract:

This paper presents the results of a liquid–liquid equilibrium data correlation for 11 ternary systems which have not been previously fitted using the NRTL model or, when they have, the results presented in the literature are inconsistent with the experimental behavior of the system. These ternary systems include mixtures with one or two partially miscible pairs. During the correlation process, new restrictions were imposed on the values for the NRTL binary parameters to ensure correct prediction of the total or partial miscibility for the binary pairs involved. In addition, topological concepts related to the Gibbs stability test have been applied in order to validate the results in the whole range of compositions.
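For reference, the binary form of the NRTL activity-coefficient model used in the correlation can be written down compactly. The sketch below uses the standard textbook equations with dimensionless interaction parameters τij = Δgij/(RT); the function name and default non-randomness parameter are our own choices.

```python
import math

def nrtl_binary(x1, tau12, tau21, alpha=0.2):
    """Binary NRTL activity coefficients (gamma1, gamma2).

    tau12, tau21: dimensionless interaction parameters Delta g_ij/(R T).
    alpha: non-randomness parameter (commonly 0.2-0.47).
    """
    x2 = 1.0 - x1
    G12 = math.exp(-alpha * tau12)
    G21 = math.exp(-alpha * tau21)
    ln_g1 = x2**2 * (tau21 * (G21 / (x1 + x2 * G21))**2
                     + tau12 * G12 / (x2 + x1 * G12)**2)
    ln_g2 = x1**2 * (tau12 * (G12 / (x2 + x1 * G12))**2
                     + tau21 * G21 / (x1 + x2 * G21)**2)
    return math.exp(ln_g1), math.exp(ln_g2)
```

A quick way to check the miscibility behaviour implied by a parameter set, in the spirit of the restrictions described above, is to scan the activity x1·γ1 over composition: a non-monotonic activity curve signals a binary phase split.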

Relevance:

100.00%

Publisher:

Abstract:

In this paper we describe a hybrid algorithm for an even number of processors, based on an algorithm for two processors and the Overlapping Partition Method for tridiagonal systems. We also compare this hybrid method with Wang's partition method on a BSP computer. Finally, we compare the theoretical computation costs of both methods on a Cray T3D computer, using the cost model that the BSP model provides.
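For context, the building block that partition methods such as Wang's distribute across processors is the sequential tridiagonal solve. A minimal Thomas-algorithm sketch (our illustration, not code from the paper):

```python
def thomas_solve(a, b, c, d):
    """Solve a tridiagonal system with the Thomas algorithm.

    a: sub-diagonal (a[0] unused), b: diagonal, c: super-diagonal
    (c[-1] unused), d: right-hand side. O(n) but inherently
    sequential; partition methods split the system so each processor
    runs a solve like this on its own block.
    """
    n = len(b)
    cp = [0.0] * n  # modified super-diagonal
    dp = [0.0] * n  # modified right-hand side
    cp[0] = c[0] / b[0]
    dp[0] = d[0] / b[0]
    for i in range(1, n):                       # forward elimination
        m = b[i] - a[i] * cp[i - 1]
        cp[i] = c[i] / m if i < n - 1 else 0.0
        dp[i] = (d[i] - a[i] * dp[i - 1]) / m
    x = [0.0] * n
    x[-1] = dp[-1]
    for i in range(n - 2, -1, -1):              # back substitution
        x[i] = dp[i] - cp[i] * x[i + 1]
    return x
```

The data dependency running through both loops is exactly what the overlapping and partition approaches break up to obtain parallelism.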

Relevance:

100.00%

Publisher:

Abstract:

This thesis is about the study of relationships between experimental dynamical systems. The basic approach is to fit radial basis function maps between time delay embeddings of manifolds. We have shown that under certain conditions these maps are generically diffeomorphisms, and can be analysed to determine whether or not the manifolds in question are diffeomorphically related to each other. If not, a study of the distribution of errors may provide information about the lack of equivalence between the two. The method has applications wherever two or more sensors are used to measure a single system, or where a single sensor can respond on more than one time scale: their respective time series can be tested to determine whether or not they are coupled, and to what degree. One application which we have explored is the determination of a minimum embedding dimension for dynamical system reconstruction. In this special case the diffeomorphism in question is closely related to the predictor for the time series itself. Linear transformations of delay embedded manifolds can also be shown to have nonlinear inverses under the right conditions, and we have used radial basis functions to approximate these inverse maps in a variety of contexts. This method is particularly useful when the linear transformation corresponds to the delay embedding of a finite impulse response filtered time series. One application of fitting an inverse to this linear map is the detection of periodic orbits in chaotic attractors, using suitably tuned filters. This method has also been used to separate signals with known bandwidths from deterministic noise, by tuning a filter to stop the signal and then recovering the chaos with the nonlinear inverse. The method may have applications to the cancellation of noise generated by mechanical or electrical systems. In the course of this research a sophisticated piece of software has been developed. 
The program allows the construction of a hierarchy of delay embeddings from scalar and multi-valued time series. The embedded objects can be analysed graphically, and radial basis function maps can be fitted between them asynchronously, in parallel, on a multi-processor machine. In addition to a graphical user interface, the program can be driven by a batch mode command language, incorporating the concept of parallel and sequential instruction groups and enabling complex sequences of experiments to be performed in parallel in a resource-efficient manner.
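The two basic operations described above, delay embedding a time series and fitting a radial basis function map between embedded manifolds, can be sketched briefly. The implementation below is a generic illustration with Gaussian basis functions and our own naming, not the thesis software.

```python
import numpy as np

def delay_embed(series, dim, tau=1):
    """Time-delay embedding: rows are [x_t, x_{t-tau}, ..., x_{t-(dim-1)tau}]."""
    n = len(series) - (dim - 1) * tau
    return np.column_stack(
        [series[(dim - 1 - k) * tau : (dim - 1 - k) * tau + n]
         for k in range(dim)])

def fit_rbf_map(X, Y, width=1.0, ridge=1e-8):
    """Fit a Gaussian radial basis function map from points X to targets Y.

    Centers are the training points themselves; a small ridge term keeps
    the kernel matrix well conditioned. Returns a callable approximating
    the map.
    """
    sq = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
    K = np.exp(-sq / (2 * width**2))
    W = np.linalg.solve(K + ridge * np.eye(len(X)), Y)

    def predict(Z):
        sqz = ((Z[:, None, :] - X[None, :, :]) ** 2).sum(-1)
        return np.exp(-sqz / (2 * width**2)) @ W

    return predict
```

In the spirit of the thesis, one would embed two simultaneous measurements of the same system and fit such a map between the two embeddings; small, structureless residuals are consistent with a diffeomorphic relation, while structured errors point to a lack of equivalence.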

Relevance:

100.00%

Publisher:

Abstract:

This paper was presented at the 12th International Conference on Applications of Computer Algebra, Varna, Bulgaria, June 2006.

Relevance:

100.00%

Publisher:

Abstract:

This special issue on ‘Science for the management of subtropical embayments: examples from Shark Bay and Florida Bay’ is a valuable compilation of individual research outcomes from Florida Bay and Shark Bay over the past decade, and it addresses gaps in our scientific knowledge base, especially for Shark Bay. Yet the compilation also demonstrates excellent research that is poorly integrated, driven by interests and issues that do not necessarily lead to more integrated stewardship of the marine natural values of either Shark Bay or Florida Bay. Here we describe the status of current knowledge, introduce the valuable extensions of that knowledge offered by the papers in this issue, and then suggest some future directions. For management, there is a need for a multidisciplinary international science program that focuses research on the ecological resilience of Shark Bay and Florida Bay and on the interactions between physical environmental drivers and biological control through behavioural and trophic interactions, all under increasing anthropogenic stressors. Shark Bay offers a ‘pristine template’ for this scale of study.

Relevance:

100.00%

Publisher:

Abstract:

A planar reconfigurable linear (also rectilinear) rigid-body motion linkage (RLRBML) with two operation modes, a linear rigid-body motion mode and a lockup mode, is presented using only R (revolute) joints. The RLRBML does not require disassembly or external intervention to meet multi-task requirements. It is created by combining a Robert's linkage and a double parallelogram linkage (with rocker links of equal length) arranged in parallel, which converts a limited circular motion into a linear rigid-body motion without any reference guide way. This linear rigid-body motion is achieved because the double parallelogram linkage guarantees the translation of the motion stage, while the Robert's linkage ensures the approximate straight-line motion of its pivot joint connecting to the double parallelogram linkage. The RLRBML is in the linear rigid-body motion mode when the four rocker links in the double parallelogram linkage are not parallel. The motion stage is in the lockup mode when all four rocker links in the double parallelogram linkage are kept parallel in a tilted position (with the inner and outer pairs of rocker links still parallel). In the lockup mode, the motion stage of the RLRBML is prevented from moving even under power off, but the double parallelogram linkage remains movable for its own rotation application. Further RLRBMLs can be obtained from the above RLRBML by replacing the Robert's linkage with any other straight-line motion linkage (such as a Watt's linkage). Additionally, a compact RLRBML and two single-mode linear rigid-body motion linkages are presented.

Relevance:

100.00%

Publisher:

Abstract:

Articular cartilage damage is a persistent and increasing problem with the aging population, and treatments to achieve biological repair or restoration remain a challenge. Cartilage tissue engineering approaches have been investigated for over 20 years, but have yet to achieve the consistency and effectiveness for widespread clinical use. One of the potential reasons for this is that the engineered tissues do not have or establish the normal zonal organization of cells and extracellular matrix that appears critical for normal tissue function. A number of approaches are being taken currently to engineer tissue that more closely mimics the organization of native articular cartilage. This review focuses on the zonal organization of native articular cartilage, strategies being used to develop such organization, the reorganization that occurs after culture or implantation, and future prospects for the tissue engineering of articular cartilage with biomimetic zones.

Relevance:

100.00%

Publisher:

Abstract:

This paper investigates robust H∞ control for Takagi-Sugeno (T-S) fuzzy systems with interval time-varying delay. By employing a new, tighter integral inequality and constructing an appropriate Lyapunov functional, delay-dependent stability criteria are derived for the control problem. Because neither model transformations nor free weighting matrices are employed in the theoretical derivation, the developed stability criteria significantly improve upon and simplify the existing stability conditions. Moreover, the maximum allowable upper delay bound and the controller feedback gains can be obtained simultaneously from the developed approach by solving a constrained convex optimization problem. Numerical examples are given to demonstrate the effectiveness of the proposed methods.

Relevance:

100.00%

Publisher:

Abstract:

Various load compensation schemes proposed in the literature assume that the voltage source at the point of common coupling (PCC) is stiff. In practice, however, the load is remote from the distribution substation and is supplied through a feeder. In the presence of feeder impedance, the PWM inverter switchings distort both the PCC voltage and the source currents. In this paper, load compensation with such a non-stiff source is considered. A switching control of the voltage source inverter (VSI) based on state feedback is used for load compensation with the non-stiff source. The design of the state feedback controller requires careful consideration in choosing the gain matrix and in generating the reference quantities. These aspects are addressed in this paper, and detailed simulation and experimental results are given to support the control design.