949 results for k-Error linear complexity
Abstract:
Humans and robots have complementary strengths in performing assembly operations. Humans are very good at perception tasks in unstructured environments: they can recognize and locate a part in a box of miscellaneous parts, and they are also very good at complex manipulation in tight spaces. Their sensory characteristics, motor abilities, knowledge, and skills give humans the ability to react to unexpected situations and resolve problems quickly. In contrast, robots are very good at pick-and-place operations and are highly repeatable in placement tasks. Robots can perform tasks at high speed while maintaining precision, can operate for long periods of time, and are very good at applying high forces and torques.
Typically, robots are used in mass production, while small-batch and custom production operations predominantly use manual labor. High labor cost is making it difficult for small and medium manufacturers, which are mainly involved in small-batch and custom production, to remain cost-competitive in high-wage markets. They need a way to reduce the labor cost of assembly operations. Purely robotic cells will not provide them the necessary flexibility. Creating hybrid cells where humans and robots can collaborate in close physical proximity is a potential solution. The underlying idea behind such cells is to decompose assembly operations into tasks such that humans and robots collaborate by performing the sub-tasks that are suitable for them. Realizing hybrid cells that enable effective human-robot collaboration is challenging. This dissertation addresses the following three computational issues involved in developing and utilizing hybrid assembly cells:
- We should be able to automatically generate plans for operating hybrid assembly cells to ensure efficient cell operation. This requires generating feasible assembly sequences and instructions for robots and human operators, respectively. Automated planning poses two challenges. First, generating operation plans for complex assemblies is difficult; the complexity can stem from the combinatorial explosion caused by the size of the assembly or from the complex paths needed to perform the assembly. Second, generating feasible plans requires accounting for robot and human motion constraints. The first objective of the dissertation is therefore to develop the underlying computational foundations for automatically generating plans for the operation of hybrid cells, addressing both assembly complexity and motion constraints.
- Collaboration between humans and robots in the assembly cell will only be practical if human safety can be ensured during the assembly tasks that require it. The second objective of the dissertation is to evaluate different options for real-time monitoring of the state of the human operator with respect to the robot, and to develop strategies for taking appropriate measures when a planned robot move may compromise the operator's safety. To be competitive in the market, the developed solution must take cost into account without significantly compromising quality.
- In the envisioned hybrid cell, we rely on human operators to bring parts into the cell. If the human operator selects the wrong part or fails to place it correctly, the robot will be unable to perform the task assigned to it, and an undetected error can lead to a defective product and inefficiencies in cell operation. Human error can arise either from confusion due to poor-quality instructions or from the operator not paying adequate attention to the instructions. Ensuring smooth, error-free operation of the cell therefore requires monitoring the state of the assembly operations in it. The third objective of the dissertation is to identify and track parts in the cell and to automatically generate instructions for taking corrective actions if a human operator deviates from the selected plan. Corrective actions may involve re-planning, when assembly can continue from the current state, or issuing warnings and generating instructions to undo the current task.
Abstract:
We consider the a posteriori error analysis and hp-adaptation strategies for hp-version interior penalty discontinuous Galerkin methods for second-order partial differential equations with nonnegative characteristic form on anisotropically refined computational meshes with anisotropically enriched elemental polynomial degrees. In particular, we exploit duality-based hp-error estimates for linear target functionals of the solution, and design and implement the corresponding adaptive algorithms to ensure reliable and efficient control of the error in the prescribed functional to within a given tolerance. This involves exploiting both local isotropic and anisotropic mesh refinement and isotropic and anisotropic polynomial degree enrichment. The superiority of the proposed algorithm in comparison with standard hp-isotropic mesh refinement algorithms and an h-anisotropic/p-isotropic adaptive procedure is illustrated by a series of numerical experiments.
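For orientation, the duality-based estimate underlying such hp-adaptive algorithms has, in its generic dual-weighted-residual form, the following structure (a standard statement sketched under simplifying assumptions of a linear problem and a linear functional; it is not the paper's precise interior penalty formulation):

```latex
% Primal problem: find u with a(u, v) = \ell(v) for all admissible v.
% Dual problem:   find z with a(v, z) = J(v)   for all admissible v.
% Galerkin orthogonality then gives, for any discrete dual z_h,
\[
  J(u) - J(u_h)
  \;=\; \ell(z - z_h) - a(u_h,\, z - z_h)
  \;=\; \sum_{\kappa \in \mathcal{T}_h} \eta_\kappa ,
\]
% and the adaptive loop refines (in h and/or p, isotropically or
% anisotropically) wherever the local indicators |\eta_\kappa| are
% large, until |\sum_\kappa \eta_\kappa| meets the given tolerance.
```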
Abstract:
The proliferation of new mobile communication devices, such as smartphones and tablets, has led to exponential growth in network traffic. The demand to support fast-growing consumer data rates urges wireless service providers and researchers to seek a new, efficient radio access technology beyond what current 4G LTE can provide: the so-called 5G technology. At the same time, ubiquitous RFID tags, sensors, actuators, mobile phones, etc. cut across many areas of modern-day living and offer the ability to measure, infer, and understand environmental indicators. The proliferation of these devices has given rise to the Internet of Things (IoT). For researchers and engineers in wireless communication, exploring new, effective techniques to support 5G communication and the IoT is an urgent task that not only leads to fruitful research but also enhances the quality of everyday life. Massive MIMO, which has shown great potential for improving the achievable rate with a very large number of antennas, has become a popular candidate. However, deploying a large number of antennas at the base station may not be feasible in indoor scenarios. Does a good alternative exist that can achieve system performance similar to massive MIMO in indoor environments? In this dissertation, we address this question by proposing the time-reversal (TR) technique as a counterpart of massive MIMO in indoor scenarios with a massive multipath effect. It is well known that radio signals experience many multipaths due to reflection from various scatterers, especially in indoor environments. The traditional TR waveform is able to create a focusing effect at the intended receiver with very low transmitter complexity in a severe multipath channel. TR's focusing effect is in essence a spatial-temporal resonance effect that brings all the multipaths to arrive at a particular location at a specific moment. We show that by using time-reversal signal processing with a sufficiently large bandwidth, one can harvest the massive multipaths naturally existing in a rich-scattering environment to form a large number of virtual antennas and achieve the desired massive multipath effect with a single antenna. Further, we explore the optimal bandwidth for a TR system to achieve maximal spectral efficiency. By evaluating the spectral efficiency, the optimal bandwidth for a TR system is found to be determined by system parameters, e.g., the number of users and the backoff factor, rather than by the waveform type. Moreover, we investigate the tradeoff between complexity and performance by establishing a generalized relationship between system performance and waveform quantization in a practical communication system. It is shown that 4-bit quantized waveforms achieve a bit-error rate similar to that of a TR system with perfect-precision waveforms. Besides 5G technology, the Internet of Things (IoT) is another area that has recently attracted more and more attention from both academia and industry. In the second part of this dissertation, the heterogeneity issue within the IoT is explored. One significant form of heterogeneity, given the massive number of devices in the IoT, is device heterogeneity, i.e., heterogeneous bandwidths and associated radio-frequency (RF) components.
Traditional middleware techniques result in fragmentation of the whole network, hampering object interoperability and slowing the development of a unified reference model for the IoT. We propose a novel TR-based heterogeneous system that can address bandwidth heterogeneity while maintaining the benefits of TR. The increase in complexity in the proposed system lies in the digital processing at the access point (AP), rather than at the devices' end, where it can easily be handled with a more powerful digital signal processor (DSP). Meanwhile, the complexity of the terminal devices stays low and therefore satisfies the low-complexity and scalability requirements of the IoT. Since there is no middleware in the proposed scheme and the additional physical-layer complexity is concentrated on the AP side, the proposed heterogeneous TR system better satisfies the low-complexity and energy-efficiency requirements of the terminal devices (TDs) compared with the middleware approach.
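To illustrate the focusing effect invoked above, the following minimal Python sketch applies the basic TR prefilter (the conjugated, time-reversed channel impulse response) to a synthetic multipath channel; the channel model, tap count, and decay constant are assumptions chosen for illustration, not parameters from the dissertation:

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic rich-multipath channel: L complex taps with decaying power.
L = 64
h = (rng.normal(size=L) + 1j * rng.normal(size=L)) * np.exp(-np.arange(L) / 20)

# Basic TR prefilter: conjugated, time-reversed channel (unit energy).
g = np.conj(h[::-1]) / np.linalg.norm(h)

# The effective channel g * h is the autocorrelation of h: it piles all
# multipath energy onto the single tap at delay L-1, which is the
# spatial-temporal focusing (resonance) effect described above.
h_eff = np.convolve(g, h)
mags = np.sort(np.abs(h_eff))
print(f"focusing peak {mags[-1]:.2f} vs strongest sidelobe {mags[-2]:.2f}")
```

With many resolvable taps the peak-to-sidelobe ratio grows with the number of paths, which is the sense in which a large bandwidth harvests multipaths as virtual antennas.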
Abstract:
Bahadur representation and its applications have attracted a large number of publications and presentations on a wide variety of problems. Mixing dependence is weak enough to describe the dependence structure of random variables, including observations in time series and longitudinal studies. This note proves the Bahadur representation of sample quantiles for strongly mixing random variables (including ρ-mixing and φ-mixing) under very weak mixing coefficients. As an application, asymptotic normality is derived. These results greatly improve those recently reported in the literature.
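For reference, the Bahadur representation in its classical form reads as follows (a standard statement; the precise remainder rate under the note's mixing conditions is its contribution and is not reproduced here). For a distribution function $F$ with density $f$ positive at the $p$-th quantile $\xi_p$, the sample quantile $\hat{\xi}_{p,n}$ admits the expansion:

```latex
\[
  \hat{\xi}_{p,n} \;=\; \xi_p \;+\; \frac{p - F_n(\xi_p)}{f(\xi_p)} \;+\; R_n,
  \qquad
  F_n(t) \;=\; \frac{1}{n}\sum_{i=1}^{n} \mathbf{1}\{X_i \le t\},
\]
```

where $F_n$ is the empirical distribution function and the remainder $R_n$ vanishes almost surely at a rate governed by the mixing coefficients; asymptotic normality of $\hat{\xi}_{p,n}$ then follows from a central limit theorem applied to $F_n(\xi_p)$.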
Abstract:
Master's dissertation in Systems and Computing Engineering (Control Systems area), Faculdade de Ciências e Tecnologia, Univ. do Algarve, 2001.
Abstract:
This thesis details the design and applications of a terahertz (THz) frequency comb spectrometer. The spectrometer employs two Ti:Sapphire femtosecond oscillators with repetition rates of approximately 80 MHz, offset locked at 100 Hz to continuously sample a time delay of 12.5 ns at a maximum time-delay resolution of 15.6 fs. These oscillators emit continuous pulse trains, allowing the generation of a THz pulse train by the master, or pump, oscillator and the sampling of this THz pulse train by the slave, or probe, oscillator via the electro-optic effect. Collecting a train of 16 consecutive THz pulses and taking its Fourier transform produces a decade-spanning frequency comb, from 0.25 to 2.5 THz, with a comb tooth width of 5 MHz and a comb tooth spacing of ~80 MHz. This frequency comb is suitable for Doppler-limited rotational spectroscopy of small molecules. Here, the data from 68 individual scans at slightly different pump oscillator repetition rates were combined, producing an interleaved THz frequency comb spectrum with a maximum interval between comb teeth of 1.4 MHz, enabling THz frequency comb spectroscopy.
The accuracy of the THz frequency comb spectrometer was tested, achieving a root-mean-square error of 92 kHz in measurements of selected absorption center frequencies of water vapor at 10 mTorr, and a root-mean-square error of 150 kHz in measurements of a K-stack of acetonitrile. This accuracy is sufficient for fitting measured transitions to a model Hamiltonian to generate a predicted spectrum for molecules of interest in astronomy and physical chemistry. As such, the rotational spectra of methanol and methanol-OD were acquired with the spectrometer. Absorptions from 1.3 THz to 2.0 THz were compared to JPL catalog data for methanol; the spectrometer achieved an RMS error of 402 kHz, improving to 303 kHz when low signal-to-noise absorptions were excluded. This level of accuracy compares favorably with the ~100 kHz accuracy achieved by JPL frequency-multiplier submillimeter spectrometers. Additionally, the relative intensity response of the THz frequency comb spectrometer is linear across the entire decade-spanning bandwidth, making it the preferred instrument for recovering lineshapes and taking absolute intensity measurements in the THz region. The data acquired for methanol-OD are of comparable accuracy to the methanol data and may be used to refine the fit parameters for the predicted spectrum of methanol-OD.
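The quoted sampling parameters are mutually consistent, as the following quick check of the equivalent-time-sampling arithmetic shows (a verification sketch, not code from the thesis):

```python
# Equivalent-time sampling parameters of the dual-oscillator spectrometer.
f_rep = 80e6   # pump repetition rate (Hz), approximate
df = 100.0     # offset lock between the two oscillators (Hz)
n_pulses = 16  # consecutive THz pulses per collected train

t_window = 1.0 / f_rep  # delay window swept each cycle: 12.5 ns
# Each probe pulse slips by df / (f_rep * (f_rep + df)) per pump pulse:
dt = df / (f_rep * (f_rep + df))  # time-delay resolution: ~15.6 fs
# Fourier transform of an n_pulses-long train sets the tooth width:
tooth_width = 1.0 / (n_pulses * t_window)  # 5 MHz
tooth_spacing = f_rep                      # ~80 MHz

print(f"{t_window*1e9:.1f} ns, {dt*1e15:.1f} fs, {tooth_width/1e6:.1f} MHz")
```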
Abstract:
The K-means algorithm is one of the most popular clustering algorithms in current use, as it is relatively fast yet simple to understand and deploy in practice. Nevertheless, its use entails certain restrictive assumptions about the data, the negative consequences of which are not always immediately apparent, as we demonstrate. While more flexible algorithms have been developed, their widespread use has been hindered by their computational and technical complexity. Motivated by these considerations, we present a flexible alternative to K-means that relaxes most of the assumptions while remaining almost as fast and simple. This novel algorithm, which we call MAP-DP (maximum a posteriori Dirichlet process mixtures), is statistically rigorous, as it is based on nonparametric Bayesian Dirichlet process mixture modeling. This approach allows us to overcome most of the limitations imposed by K-means. The number of clusters K is estimated from the data instead of being fixed a priori as in K-means. In addition, while K-means is restricted to continuous data, the MAP-DP framework can be applied to many kinds of data, for example binary, count, or ordinal data. It can also efficiently separate outliers from the data. This additional flexibility does not incur a significant computational overhead compared to K-means, with MAP-DP convergence typically achieved on the order of seconds for many practical problems. Finally, in contrast to K-means, since the algorithm is based on an underlying statistical model, the MAP-DP framework can deal with missing data and enables model testing such as cross-validation in a principled way. We demonstrate the simplicity and effectiveness of this algorithm on the health informatics problem of clinical sub-typing in a cluster of diseases known as parkinsonism.
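To convey the flavor of the approach, here is a deliberately simplified Python sketch of a MAP-DP-style assignment loop for spherical Gaussian clusters with fixed variance and a unit Gaussian prior on cluster means. The published algorithm uses full conjugate exponential-family posterior predictives, so every modeling choice below (sigma2, the prior, the cost terms) is an assumption for illustration only:

```python
import numpy as np

def map_dp_sketch(X, alpha=1.0, sigma2=1.0, max_iter=100):
    """Toy MAP-DP-style clustering: spherical Gaussians with fixed
    variance sigma2 and an N(0, I) prior on cluster means."""
    n, _ = X.shape
    z = np.zeros(n, dtype=int)           # start with one big cluster
    for _ in range(max_iter):
        changed = False
        for i in range(n):
            labels = np.delete(z, i)     # statistics exclude point i
            pts = np.delete(X, i, axis=0)
            costs, cands = [], []
            for k in np.unique(labels):
                members = pts[labels == k]
                mu, nk = members.mean(axis=0), len(members)
                # data misfit minus the DP "rich get richer" log mass
                costs.append(np.sum((X[i] - mu) ** 2) / (2 * sigma2)
                             - np.log(nk))
                cands.append(k)
            # Opening a new cluster: prior predictive, penalized via alpha;
            # reuse the point's own label if it is currently a singleton.
            new_k = z[i] if (labels == z[i]).sum() == 0 else z.max() + 1
            costs.append(np.sum(X[i] ** 2) / (2 * (sigma2 + 1.0))
                         - np.log(alpha))
            cands.append(new_k)
            best = cands[int(np.argmin(costs))]
            if best != z[i]:
                z[i], changed = best, True
        if not changed:                  # assignments stable: a MAP fix point
            break
    return z
```

Note how the only difference from a K-means sweep is the extra "open a new cluster" candidate, whose cost is controlled by the concentration parameter alpha; this is how K is estimated from the data rather than fixed a priori.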
Abstract:
We propose weakly-constrained stream and block codes with tunable pattern-dependent statistics and demonstrate that the block code capacity at large block sizes is close to the prediction obtained from a simple Markov model published earlier. We demonstrate the feasibility of the code by presenting original encoding and decoding algorithms with a complexity log-linear in the block size and with modest table memory requirements. We also show that when such codes are used to mitigate patterning effects in optical fibre communications, a gain of about 0.5 dB is possible under realistic conditions, at the expense of a small redundancy (≈10%). © 2010 IEEE
Abstract:
This study measured care personnel's perception of patient safety culture in a first-level-of-complexity hospital by means of a descriptive cross-sectional study. The measurement tool was the Spanish version of the Hospital Survey on Patient Safety Culture (HSOPSC) of the Agency for Healthcare Research and Quality (AHRQ), which evaluates twelve dimensions. The results showed strengths such as organizational learning, continuous improvement, and management support for patient safety. The dimensions classified as opportunities for improvement were the non-punitive culture, staffing, handoffs and transitions, and the degree to which communication is open. It was concluded that although the staff perceived the improvement process and management support as positive, they also felt they were judged if they reported an adverse event.
Abstract:
Organizations and their environments are complex systems. Such systems are difficult to understand and predict. Even so, prediction is a fundamental task for business management and for decision-making, which always involves risk. Classical prediction methods (among them linear regression, autoregressive moving average models, and exponential smoothing) rely on assumptions such as linearity and stability in order to remain mathematically and computationally tractable. The limitations of such methods, however, have been demonstrated by various means. In recent decades, new prediction methods have emerged that seek to embrace the complexity of organizational systems and their environments rather than avoid it. Among them, the most promising are bio-inspired prediction methods (e.g., neural networks, genetic/evolutionary algorithms, and artificial immune systems). This article aims to establish a state of the art of current and potential applications of bio-inspired prediction methods in management.
Abstract:
This exploratory work studies the political movement Mesa de la Unidad Democrática (MUD), created to oppose the incumbent socialist government in Venezuela. The critique offered in this document is made from the standpoint of complexity science. Some key concepts of complex systems are used to explain the functioning and organization of the MUD, with the aim of producing a comprehensive diagnosis of the problems it faces and bringing to light new insights into the harmful behaviors the party currently exhibits. The complexity approach is intended to help better understand the context surrounding the party and, finally, to contribute a series of solutions to the cohesion problems it presents.
Abstract:
We generalize the Liapunov convexity theorem's version for vectorial control systems driven by linear ODEs of first order ($p = 1$), in any dimension $d \in \mathbb{N}$, by including a pointwise state constraint. More precisely, given $\overline{x}(\cdot) \in W^{p,1}([a,b],\mathbb{R}^d)$ solving the convexified $p$-th order differential inclusion $L_p \overline{x}(t) \in \operatorname{co}\{u_0(t), u_1(t), \dots, u_m(t)\}$ a.e., consider the general problem consisting in finding bang-bang solutions (i.e. $L_p \widehat{x}(t) \in \{u_0(t), u_1(t), \dots, u_m(t)\}$ a.e.) under the same boundary data, $\widehat{x}^{(k)}(a) = \overline{x}^{(k)}(a)$ and $\widehat{x}^{(k)}(b) = \overline{x}^{(k)}(b)$ ($k = 0, 1, \dots, p-1$), but restricted, moreover, by a pointwise state constraint of the type $\langle \widehat{x}(t), \omega \rangle \le \langle \overline{x}(t), \omega \rangle$ $\forall t \in [a,b]$ (e.g. $\omega = (1, 0, \dots, 0)$, yielding $\widehat{x}_1(t) \le \overline{x}_1(t)$). Previous results in the scalar case $d = 1$ were the pioneering Amar & Cellina paper (dealing with $L_p x(\cdot) = x'(\cdot)$), followed by the results of Cerf & Mariconda, who solved the general case of linear differential operators $L_p$ of order $p \ge 2$ with $C^0([a,b])$ coefficients. This paper is dedicated to: focusing on the missing case $p = 1$, i.e. using $L_p x(\cdot) = x'(\cdot) + A(\cdot)\,x(\cdot)$; generalizing the dimension of $x(\cdot)$ from the scalar case $d = 1$ to the vectorial case $d \in \mathbb{N}$; weakening the coefficients from continuous to integrable, so that $A(\cdot)$ now becomes a $d \times d$ integrable matrix; and allowing the directional vector $\omega$ to become a moving AC function $\omega(\cdot)$. Previous vectorial results had constant $\omega$, no matrix (i.e. $A(\cdot) \equiv 0$), and considered constant control vertices (Amar & Mariconda) and, more recently, integrable control vertices (ourselves).
Abstract:
In this thesis we studied time series that represent the complex dynamics of behavior, focusing on techniques from nonlinear dynamics. These techniques provide a number of quantitative indices that describe the dynamic properties of a system, and such indices have been used intensively in recent years in practical applications in psychology. We studied some basic concepts of nonlinear dynamics, the characteristics of chaotic systems, and several quantities that characterize dynamical systems: the fractal dimension, which indicates the complexity of the information contained in the time series; the Lyapunov exponents, which indicate the rate at which arbitrarily close points in the phase-space representation of the dynamics diverge over time; and the approximate entropy, which measures the degree of unpredictability of a time series. This information can then be used to understand, and possibly predict, the behavior.
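Of the quantities listed, the approximate entropy is the most directly algorithmic; a minimal Python sketch following the standard Pincus definition (with the common choices m = 2 and r = 0.2·SD as assumed defaults) is:

```python
import numpy as np

def approximate_entropy(x, m=2, r=None):
    """Approximate entropy (Pincus) of a 1-D time series.
    O(n^2) memory; fine for the short series typical in psychology."""
    x = np.asarray(x, dtype=float)
    n = len(x)
    if r is None:
        r = 0.2 * np.std(x)  # common heuristic tolerance

    def phi(m):
        # All length-m templates of the series.
        templates = np.array([x[i:i + m] for i in range(n - m + 1)])
        # Chebyshev distance between every pair of templates.
        dists = np.max(np.abs(templates[:, None, :] - templates[None, :, :]),
                       axis=2)
        # C_i = fraction of templates within tolerance r of template i
        # (self-matches included, as in the original definition).
        c = np.mean(dists <= r, axis=1)
        return np.mean(np.log(c))

    return phi(m) - phi(m + 1)
```

Low values indicate regular, predictable dynamics; higher values indicate greater unpredictability, which is how the index is read in the applications described above.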
Abstract:
Understanding the natural and forced variability of the atmospheric general circulation and its drivers is one of the grand challenges in climate science. It is of paramount importance to understand to what extent the systematic error of climate models affects the processes driving such variability. This is done by performing a set of simulations (ROCK experiments) with an intermediate-complexity atmospheric model (SPEEDY), in which the Rocky Mountains orography is increased or decreased to influence the structure of the North Pacific jet stream. For each of these modified-orography experiments, the climatic response to idealized sea surface temperature anomalies of varying intensity in the El Niño Southern Oscillation (ENSO) region is studied. The ROCK experiments are characterized by variations in Pacific jet stream intensity whose range encompasses the spread of the systematic error found in Coupled Model Intercomparison Project (CMIP6) models. When forced with ENSO-like idealized anomalies, they exhibit a non-negligible sensitivity in the response pattern over the Pacific North American region, indicating that the model mean state can affect the model response to ENSO. It is found that the classical Rossby wave train response to ENSO is more meridionally oriented when the Pacific jet stream is weaker, and more zonally oriented with a stronger jet. Linear Rossby wave theory suggests that a stronger jet implies a stronger waveguide, which traps Rossby waves at a lower latitude, favouring zonal propagation. The shape of the dynamical response to ENSO affects the ENSO impacts on surface temperature and precipitation over Central and North America. A comparison of the SPEEDY results with CMIP6 models suggests a wider applicability of the results to more resource-demanding general circulation models (GCMs), opening the way for future work on the relationship between Pacific jet misrepresentation and the response to external forcing in fully-fledged GCMs.
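The waveguide argument invoked here rests on the standard barotropic ray-tracing result (a textbook statement in the spirit of Hoskins and Karoly, not a formula from the thesis): stationary Rossby waves propagate where the local stationary wavenumber is real, and rays refract toward latitudes where it is large,

```latex
\[
  K_s \;=\; \left( \frac{\beta - \bar{u}_{yy}}{\bar{u}} \right)^{1/2},
\]
```

where $\bar{u}$ is the background zonal wind, $\beta$ the meridional gradient of planetary vorticity, and $\bar{u}_{yy}$ the meridional curvature of the flow. At a jet core $\bar{u}_{yy} < 0$, so a sharper jet enhances the numerator there; when this curvature effect dominates, $K_s$ attains a local maximum at the jet axis, and refraction toward large $K_s$ traps the wave train in the jet and ducts it zonally, consistent with the more zonally oriented response found for a stronger jet.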
Abstract:
The cerebral cortex presents self-similarity over a proper interval of spatial scales, a property typical of natural objects exhibiting fractal geometry. Its complexity can therefore be characterized by the value of its fractal dimension (FD). In computing this metric, a frequentist approach to probability has usually been employed, with point-estimator methods yielding only the optimal values of the FD. In our study, we aimed at retrieving a more complete evaluation of the FD by utilizing a Bayesian model for the linear regression analysis of the box-counting algorithm. We used T1-weighted MRI data of 86 healthy subjects (age 44.2 ± 17.1 years, mean ± standard deviation, 48% males) in order to gain insight into the confidence of our measure and investigate the relationship between mean Bayesian FD and age. Our approach yielded a stronger and significant (P < .001) correlation between mean Bayesian FD and age compared to the previous implementation. Our results therefore suggest that the Bayesian FD is a more faithful estimate of the fractal dimension of the cerebral cortex than the frequentist FD.
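As an illustration of the box-counting regression being made Bayesian, here is a minimal Python sketch (an assumption-laden toy, not the paper's pipeline: it uses a flat prior, a binary voxel mask, and a hand-picked set of box sizes). The FD is the slope of log N(s) against log(1/s), and the Bayesian treatment returns a posterior mean and spread for that slope instead of a single point estimate:

```python
import numpy as np

def box_counts(mask, sizes):
    """Occupied-box counts of a 3-D binary mask at several box sizes."""
    counts = []
    for s in sizes:
        # Trim each axis to a multiple of s, then tile into s^3 boxes.
        m = mask[:mask.shape[0] // s * s,
                 :mask.shape[1] // s * s,
                 :mask.shape[2] // s * s]
        boxes = m.reshape(m.shape[0] // s, s,
                          m.shape[1] // s, s,
                          m.shape[2] // s, s)
        counts.append(int(boxes.any(axis=(1, 3, 5)).sum()))
    return np.array(counts)  # assumes a nonempty mask at every scale

def bayesian_fd(mask, sizes=(2, 3, 4, 6, 8, 12, 16)):
    """Posterior mean and std of the FD under a flat prior: the slope
    posterior of the log-log box-counting regression is centred on the
    least-squares fit with covariance sigma2 * (X^T X)^{-1}."""
    y = np.log(box_counts(mask, sizes))
    x = np.log(1.0 / np.array(sizes, dtype=float))
    X = np.column_stack([np.ones_like(x), x])
    beta = np.linalg.lstsq(X, y, rcond=None)[0]
    resid = y - X @ beta
    sigma2 = resid @ resid / (len(x) - 2)   # residual variance
    cov = sigma2 * np.linalg.inv(X.T @ X)
    return beta[1], np.sqrt(cov[1, 1])      # FD estimate and uncertainty
```

In practice one would restrict the box sizes to the scale interval where self-similarity actually holds, which is what the abstract means by "a proper interval of spatial scales".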