942 results for Nonlinear static analysis
Abstract:
One major drawback of coherent optical orthogonal frequency-division multiplexing (CO-OFDM) that hitherto remains unsolved is its vulnerability to nonlinear fiber effects due to its high peak-to-average power ratio. Several digital signal processing techniques have been investigated for the compensation of fiber nonlinearities, e.g., digital back-propagation, nonlinear pre- and post-compensation, and nonlinear equalizers (NLEs) based on the inverse Volterra-series transfer function (IVSTF). Alternatively, nonlinearities can be mitigated using nonlinear decision classifiers such as artificial neural networks (ANNs) based on a multilayer perceptron. In this paper, an ANN-NLE is presented for a 16-QAM CO-OFDM system. The capability of the proposed approach to compensate fiber nonlinearities is numerically demonstrated at up to 100 Gb/s over 1000 km of transmission and compared with the benchmark IVSTF-NLE. Results show that, in terms of Q-factor, for 100 Gb/s at 1000 km of transmission, the ANN-NLE outperforms linear equalization and the IVSTF-NLE by 3.2 dB and 1 dB, respectively.
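As a rough illustration of the ANN-based nonlinear equalization idea (not the authors' actual CO-OFDM implementation), the sketch below trains a small multilayer perceptron to map nonlinearly distorted 16-QAM symbols back toward the transmitted constellation; the power-dependent phase-rotation channel, network size, and training setup are illustrative assumptions only.

```python
# Minimal sketch of an MLP-based nonlinear equalizer (ANN-NLE) for 16-QAM symbols.
# The power-dependent phase rotation below is a stand-in, not a real CO-OFDM channel model.
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)

# Random 16-QAM symbols (real/imag levels in {-3, -1, 1, 3}).
levels = np.array([-3.0, -1.0, 1.0, 3.0])
tx = rng.choice(levels, size=20000) + 1j * rng.choice(levels, size=20000)

# Toy nonlinear channel: power-dependent phase rotation plus Gaussian noise.
gamma = 0.02
rx = tx * np.exp(1j * gamma * np.abs(tx) ** 2)
rx += 0.05 * (rng.standard_normal(rx.shape) + 1j * rng.standard_normal(rx.shape))

# Split complex samples into real/imag features and targets.
X = np.column_stack([rx.real, rx.imag])
y = np.column_stack([tx.real, tx.imag])

# Small MLP acting as the nonlinear equalizer (architecture is an assumption).
nle = MLPRegressor(hidden_layer_sizes=(16, 16), max_iter=500, random_state=0)
nle.fit(X[:15000], y[:15000])

# Equalize held-out symbols and report mean squared error before/after.
eq = nle.predict(X[15000:])
mse_before = np.mean(np.abs(rx[15000:] - tx[15000:]) ** 2)
mse_after = np.mean((eq - y[15000:]) ** 2)
print(f"MSE before: {mse_before:.4f}  after ANN-NLE: {mse_after:.4f}")
```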
Abstract:
Small errors can prove catastrophic. Our purpose is to remark that a very small cause which escapes our notice may determine a considerable effect that we cannot fail to see, and we then say that the effect is due to chance. Small differences in the initial conditions produce very great ones in the final phenomena; a small error in the former produces an enormous error in the latter. When dealing with any kind of electrical device specification, it is important to note that a pair of test conditions defines a test: the forcing function and the limit. Forcing functions define the external operating constraints placed upon the device under test; the test itself defines how well the device responds to those constraints. Forcing inputs to threshold, for example, represents the most difficult testing because it places those inputs as close as possible to the actual switching points and guarantees that the device will meet its input-output specifications. Prediction becomes impossible by classical analytical analysis bounded by Newton and Euclid. We have found that nonlinear dynamic behavior is the natural state of all circuits and devices, and opportunities exist for effective error detection in a nonlinear dynamics and chaos environment. Nowadays a set of linear limits is established around every aspect of digital and analog circuits, outside of which devices are considered bad after failing the test. Deterministic chaos in circuits is a fact, not a possibility, as this Ph.D. research has confirmed. In practice, for standard linear informational methodologies, this chaotic data product is usually undesirable, and we are trained to seek a more regular stream of output data. This Ph.D. research explored the possibility of taking the foundation of a well-known simulation and modeling methodology and introducing nonlinear dynamics and chaos precepts to produce a new error-detection instrument able to bring together streams of data scattered in space and time, thereby mastering deterministic chaos and changing the bad reputation of chaotic data as a potential risk for practical determination of system status.
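The sensitivity to initial conditions invoked above can be illustrated with a standard toy example; the logistic map below is a generic demonstration of deterministic chaos, not the circuit-level model developed in the dissertation.

```python
# Sensitivity to initial conditions: two logistic-map trajectories that start
# 1e-10 apart diverge to order-one separation after a few dozen iterations.
def logistic(x, r=4.0):
    return r * x * (1.0 - x)

x_a, x_b = 0.3, 0.3 + 1e-10
for step in range(60):
    x_a, x_b = logistic(x_a), logistic(x_b)
    if step % 10 == 9:
        print(f"step {step + 1:2d}: |difference| = {abs(x_a - x_b):.3e}")
```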
Abstract:
INTRODUCTION: Upper airway measurement can be important for the diagnosis of breathing disorders. Acoustic reflection (AR) is an accepted tool for studying the airway. Our objective was to investigate the differences between cone-beam computed tomography (CBCT) and AR in calculating airway volumes and areas. METHODS: Subjects with CBCT images already prescribed as part of their records were also asked to have AR performed. A total of 59 subjects (mean age, 15 ± 3.8 years) had their upper airway (5 areas) measured from CBCT images, acoustic rhinometry, and acoustic pharyngometry. Volumes and minimal cross-sectional areas were extracted with software and compared. RESULTS: Intraclass correlation on 20 randomly selected subjects, remeasured 2 weeks apart, showed high reliability (r > 0.77). Means of total nasal volume were significantly different between the 2 methods (P = 0.035), but anterior nasal volume and minimal cross-sectional area showed no differences (P = 0.532 and P = 0.066, respectively). Pharyngeal volume showed significant differences (P = 0.01) with high correlation (r = 0.755), whereas pharyngeal minimal cross-sectional area showed no differences (P = 0.109). The pharyngeal volume difference may not be clinically significant, since it is 758 mm³ for measurements with means of 11,000 ± 4000 mm³. CONCLUSIONS: CBCT is an accurate method for measuring anterior nasal volume, nasal minimal cross-sectional area, pharyngeal volume, and pharyngeal minimal cross-sectional area.
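For illustration only, the following sketch reproduces the kind of paired statistical comparison reported above (a paired test and a correlation between two methods measuring the same subjects) on synthetic numbers; it does not use or reproduce the study's data.

```python
# Sketch of a paired method comparison: paired t-test and Pearson correlation
# between two measurement methods applied to the same subjects.
# The numbers are synthetic; they do not reproduce the study's data.
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
n_subjects = 59
cbct = rng.normal(11000.0, 4000.0, n_subjects)      # e.g. pharyngeal volume, mm^3
ar = cbct + rng.normal(750.0, 900.0, n_subjects)    # second method with a small bias

t_stat, p_value = stats.ttest_rel(cbct, ar)         # paired t-test
r, _ = stats.pearsonr(cbct, ar)                     # correlation between methods
print(f"paired t-test p = {p_value:.3f}, correlation r = {r:.3f}")
```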
Abstract:
This paper describes an analysis performed for facial description in static images and video streams. The still-image context is first analyzed in order to decide the optimal classifier configuration for each problem: gender recognition, race classification, and detection of glasses and moustache. These results are later applied to significant samples that are automatically extracted in real time from video streams, achieving promising results in the facial description of 70 individuals in terms of gender, race, and the presence of glasses and moustache.
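A minimal sketch of the per-problem classifier selection described above, assuming one binary classifier per facial attribute chosen by cross-validation; the feature vectors, labels, and SVM configurations below are synthetic stand-ins, not the descriptors or classifiers used in the paper.

```python
# Sketch of per-attribute classifier selection: one binary classifier per
# attribute (gender, glasses, moustache), with the configuration chosen by
# cross-validation. Features and labels are synthetic placeholders.
import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.svm import SVC

rng = np.random.default_rng(2)
X = rng.standard_normal((400, 64))                 # fake face descriptors
labels = {name: rng.integers(0, 2, 400) for name in ("gender", "glasses", "moustache")}

for name, y in labels.items():
    best = max(
        (SVC(kernel=k, C=c) for k in ("linear", "rbf") for c in (0.1, 1.0, 10.0)),
        key=lambda clf: cross_val_score(clf, X, y, cv=5).mean(),
    )
    print(name, "->", best.kernel, best.C)
```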
Abstract:
With the increasing complexity of today's software, the software development process is becoming highly time and resource consuming. The increasing number of software configurations, input parameters, usage scenarios, supporting platforms, external dependencies, and versions plays an important role in expanding the costs of maintaining and repairing unforeseeable software faults. To repair software faults, developers spend considerable time in identifying the scenarios leading to those faults and root-causing the problems. While software debugging remains largely manual, it is not the case with software testing and verification. The goal of this research is to improve the software development process in general, and software debugging process in particular, by devising techniques and methods for automated software debugging, which leverage the advances in automatic test case generation and replay. In this research, novel algorithms are devised to discover faulty execution paths in programs by utilizing already existing software test cases, which can be either automatically or manually generated. The execution traces, or alternatively, the sequence covers of the failing test cases are extracted. Afterwards, commonalities between these test case sequence covers are extracted, processed, analyzed, and then presented to the developers in the form of subsequences that may be causing the fault. The hypothesis is that code sequences that are shared between a number of faulty test cases for the same reason resemble the faulty execution path, and hence, the search space for the faulty execution path can be narrowed down by using a large number of test cases. To achieve this goal, an efficient algorithm is implemented for finding common subsequences among a set of code sequence covers. Optimization techniques are devised to generate shorter and more logical sequence covers, and to select subsequences with high likelihood of containing the root cause among the set of all possible common subsequences. A hybrid static/dynamic analysis approach is designed to trace back the common subsequences from the end to the root cause. A debugging tool is created to enable developers to use the approach, and integrate it with an existing Integrated Development Environment. The tool is also integrated with the environment's program editors so that developers can benefit from both the tool suggestions, and their source code counterparts. Finally, a comparison between the developed approach and the state-of-the-art techniques shows that developers need only to inspect a small number of lines in order to find the root cause of the fault. Furthermore, experimental evaluation shows that the algorithm optimizations lead to better results in terms of both the algorithm running time and the output subsequence length.
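A simple way to picture the core idea, extracting what failing test executions have in common, is a pairwise longest-common-subsequence fold over their traces; the sketch below is a plain dynamic-programming baseline with made-up trace data, not the optimized algorithms developed in the thesis.

```python
# Heuristic sketch: intersect the execution traces of several failing tests by
# repeatedly taking a pairwise longest common subsequence (LCS). This is a
# plain dynamic-programming baseline, not the thesis's optimized algorithm.
def lcs(a, b):
    m, n = len(a), len(b)
    dp = [[[] for _ in range(n + 1)] for _ in range(m + 1)]
    for i in range(m):
        for j in range(n):
            if a[i] == b[j]:
                dp[i + 1][j + 1] = dp[i][j] + [a[i]]
            else:
                dp[i + 1][j + 1] = max(dp[i][j + 1], dp[i + 1][j], key=len)
    return dp[m][n]

def common_suspicious_path(traces):
    # Fold the pairwise LCS over all failing-test traces.
    common = traces[0]
    for trace in traces[1:]:
        common = lcs(common, trace)
    return common

failing_traces = [
    ["init", "parse", "lookup", "update", "flush"],
    ["init", "validate", "lookup", "update", "log", "flush"],
    ["init", "lookup", "retry", "update", "flush"],
]
print(common_suspicious_path(failing_traces))  # ['init', 'lookup', 'update', 'flush']
```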
Abstract:
In this work the split-field finite-difference time-domain method (SF-FDTD) has been extended to the analysis of two-dimensionally periodic structures with third-order nonlinear media. The accuracy of the method is verified by comparison with the nonlinear Fourier Modal Method (FMM). Once the formalism has been validated, examples of one- and two-dimensional nonlinear gratings are analysed. Regarding the 2D case, the resonance shift in resonant waveguides is corroborated. Here, not only the scalar Kerr effect is considered; the tensorial nature of the third-order nonlinear susceptibility is also included. The use of nonlinear materials in this kind of device makes it possible to design tunable devices such as variable band filters. However, the third-order nonlinear susceptibility is usually small, and high intensities are needed in order to trigger the nonlinear effect. Here, a one-dimensional CBG is analysed in both the linear and nonlinear regimes, and the shift of the resonance peaks for both TE and TM polarizations is obtained numerically. The application of a numerical method based on the finite-difference time-domain method allows this problem to be analysed in the time domain, so bistability curves are also computed by means of the numerical method. These curves show how the nonlinear effect modifies the properties of the structure as a function of a variable input pump field. When the nonlinear behaviour is taken into account, the estimation of the electric field components becomes more challenging. In this paper, we present a set of acceleration strategies based on parallel software and hardware solutions.
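As a heavily simplified illustration of combining FDTD with an intensity-dependent (Kerr) medium, the sketch below runs a 1D scalar FDTD loop in which the permittivity is updated from the instantaneous field; it is not the split-field formulation for 2D periodic structures, the tensorial susceptibility is not included, and the grid size, source, and Kerr coefficient are arbitrary.

```python
# Minimal 1D FDTD sketch with an instantaneous Kerr correction to the permittivity.
# A scalar toy illustration only: no PML, no periodic boundaries, no tensorial chi3.
import numpy as np

nz, nt = 400, 1200
c0, dz = 1.0, 1.0
dt = 0.5 * dz / c0                      # Courant-stable step
n0, n2 = 1.5, 1e-3                      # linear index and toy Kerr coefficient

Ez = np.zeros(nz)
Hy = np.zeros(nz)

for t in range(nt):
    # Magnetic field update (normalized units).
    Hy[:-1] += dt / dz * (Ez[1:] - Ez[:-1])
    # Intensity-dependent permittivity: eps = (n0 + n2*|E|^2)^2.
    eps = (n0 + n2 * Ez**2) ** 2
    # Electric field update with the field-dependent permittivity.
    Ez[1:] += dt / (dz * eps[1:]) * (Hy[1:] - Hy[:-1])
    # Soft Gaussian-pulse source near the left edge.
    Ez[5] += np.exp(-((t - 80) / 25.0) ** 2)

print("peak |Ez| in the grid:", np.abs(Ez).max())
```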
Abstract:
This paper compares the performance of the complex nonlinear least squares algorithm implemented in the LEVM/LEVMW software with the performance of a genetic algorithm in the characterization of an electrical impedance of known topology. The effect of the number of measured frequency points and of measurement uncertainty on the estimation of circuit parameters is presented. The analysis is performed on the equivalent circuit impedance of a humidity sensor.
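The complex nonlinear least-squares idea can be sketched with SciPy on an assumed equivalent circuit (a series resistance with a parallel R-C branch), stacking real and imaginary residuals; the circuit topology and values are placeholders, and this does not reproduce LEVM/LEVMW or the genetic algorithm compared in the paper.

```python
# Sketch of complex nonlinear least squares applied to an equivalent-circuit
# impedance, here an assumed Rs + (Rp || C) topology fitted with SciPy.
import numpy as np
from scipy.optimize import least_squares

def impedance(params, w):
    rs, rp, c = params
    zp = rp / (1.0 + 1j * w * rp * c)        # parallel R-C branch
    return rs + zp

def residuals(params, w, z_meas):
    diff = impedance(params, w) - z_meas
    return np.concatenate([diff.real, diff.imag])   # stack real/imag residuals

# Synthetic "measured" data with noise, standing in for a sensor impedance sweep.
rng = np.random.default_rng(3)
w = 2 * np.pi * np.logspace(1, 5, 40)
true = (100.0, 10e3, 1e-7)
z_meas = impedance(true, w) * (1 + 0.01 * rng.standard_normal(w.size))

fit = least_squares(residuals, x0=(50.0, 5e3, 1e-8), args=(w, z_meas))
print("estimated Rs, Rp, C:", fit.x)
```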
Abstract:
A composite is a material made of two or more constituents (phases) combined in order to achieve desirable mechanical or thermal properties. Such innovative materials have been widely used in a large variety of engineering fields in the past decades. The design of a composite structure requires the resolution of a multiscale problem that involves a macroscale (i.e., the structural scale) and a microscale. The latter plays a crucial role in determining the material behavior at the macroscale, especially when dealing with constituents characterized by nonlinearities. For this reason, numerical tools are required in order to design composite structures while taking their microstructure into account. These tools need to provide an accurate yet efficient solution in terms of time and memory requirements, owing to the large number of internal variables of the problem. Several methods address this issue by reducing the number of internal variables. Within this framework, this thesis focuses on the development of a new homogenization technique named Mixed TFA (MxTFA) to solve the homogenization problem for nonlinear composites. The technique is based on a mixed-stress variational approach involving self-equilibrated stresses and the plastic multiplier as independent variables on the Reference Volume Element (RVE). The MxTFA is developed for the cases of elastoplasticity and viscoplasticity, and it is implemented in a multiscale analysis for nonlinear composites. Numerical results show the efficiency of the presented techniques at both the microscale and the macroscale.
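For readers unfamiliar with homogenization, the trivial sketch below computes Voigt and Reuss rule-of-mixtures bounds for a two-phase composite, just to illustrate what an effective-property calculation returns; it is in no way the MxTFA technique, and the phase moduli and fiber fraction are arbitrary.

```python
# Very simple illustration of a homogenization result: Voigt and Reuss
# rule-of-mixtures bounds on the effective stiffness of a two-phase composite.
# A baseline illustration only; it is not the MxTFA method.
def voigt_reuss(e_fiber, e_matrix, fiber_fraction):
    vf, vm = fiber_fraction, 1.0 - fiber_fraction
    e_voigt = vf * e_fiber + vm * e_matrix              # uniform-strain bound
    e_reuss = 1.0 / (vf / e_fiber + vm / e_matrix)      # uniform-stress bound
    return e_voigt, e_reuss

upper, lower = voigt_reuss(e_fiber=230e9, e_matrix=3.5e9, fiber_fraction=0.6)
print(f"effective modulus between {lower/1e9:.1f} and {upper/1e9:.1f} GPa")
```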
Abstract:
The aim of this study was to evaluate, by photoelastic analysis, the stress distribution on short and long implants of two dental implant systems with 2-unit implant-supported fixed partial prostheses of 8 mm and 13 mm heights. Sixteen photoelastic models were divided into 4 groups: I: long implant (5 × 11 mm) (Neodent), II: long implant (5 × 11 mm) (Bicon), III: short implant (5 × 6 mm) (Neodent), and IV: short implant (5 × 6 mm) (Bicon). The models were positioned in a circular polariscope associated with a load cell, and static axial (0.5 kgf) and nonaxial (15°, 0.5 kgf) loads were applied to each group for both prosthetic crown heights. Three-way ANOVA was used to compare the factors implant length, crown height, and implant system (α = 0.05). The results showed that implant length was a statistically significant factor for both axial and nonaxial loading. The 13 mm prosthetic crown did not result in statistically significant differences in stress distribution between the implant systems and implant lengths studied, regardless of load type (P > 0.05). It can be concluded that short implants showed higher stress levels than long implants. Implant system and length were not relevant factors when prosthetic crown height was increased.
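A minimal sketch of the three-way ANOVA setup used above (factors: implant length, crown height, implant system) with statsmodels on fabricated stress readings; the data and effect sizes are placeholders, not the photoelastic measurements.

```python
# Sketch of a three-way ANOVA on synthetic stress readings, mirroring the
# factors implant length, crown height, and implant system.
import numpy as np
import pandas as pd
import statsmodels.api as sm
from statsmodels.formula.api import ols

rng = np.random.default_rng(4)
rows = []
for length in ("short", "long"):
    for crown in ("8mm", "13mm"):
        for system in ("A", "B"):
            stress = rng.normal(1.4 if length == "short" else 1.0, 0.1, size=4)
            rows += [{"length": length, "crown": crown, "system": system, "stress": s}
                     for s in stress]

df = pd.DataFrame(rows)
model = ols("stress ~ C(length) * C(crown) * C(system)", data=df).fit()
print(sm.stats.anova_lm(model, typ=2))
```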
Abstract:
Universidade Estadual de Campinas . Faculdade de Educação Física
Abstract:
This paper presents a simple two-dimensional frame formulation to deal with structures undergoing large motions due to dynamic actions, including very thin inflatable structures such as balloons. The proposed methodology is based on the minimum potential energy theorem written in terms of nodal positions. Velocity, acceleration, and strain are obtained directly from positions, not displacements, characterizing the novelty of the proposed technique. A non-dimensional space is created and the deformation function (change of configuration) is written following two independent mappings, from which the strain energy function is written. The classical Newmark equations are used for time integration. Damping and non-conservative forces are introduced into the mechanical system by a rheonomic energy function. The final formulation has the advantage of being simple and easy to teach when compared to classical counterparts. The behavior of a benchmark problem (the spin-up maneuver) is solved to validate the formulation for high circumferential speed applications. Other examples are dedicated to inflatable and very thin structures, in order to test the formulation for further analysis of three-dimensional balloons.
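The time-integration step mentioned above can be illustrated with the classical Newmark scheme applied to a damped single-degree-of-freedom oscillator; this is the standard textbook update, not the positional frame formulation itself, and the mass, damping, stiffness, and load values are arbitrary.

```python
# Classical Newmark-beta time integration for a damped single-DOF oscillator,
# illustrating the time-marching step the formulation above relies on.
m, c, k = 1.0, 0.1, 40.0            # mass, damping, stiffness
beta, gamma = 0.25, 0.5             # average-acceleration Newmark parameters
dt, n_steps = 0.01, 500

def force(t):
    return 1.0 if t < 0.5 else 0.0  # short pulse load

u, v = 0.0, 0.0
a = (force(0.0) - c * v - k * u) / m
for i in range(n_steps):
    t = (i + 1) * dt
    # Effective stiffness and load for the implicit Newmark update.
    k_eff = k + gamma * c / (beta * dt) + m / (beta * dt**2)
    f_eff = (force(t)
             + m * (u / (beta * dt**2) + v / (beta * dt) + (0.5 / beta - 1.0) * a)
             + c * (gamma * u / (beta * dt) + (gamma / beta - 1.0) * v
                    + dt * (gamma / (2 * beta) - 1.0) * a))
    u_new = f_eff / k_eff
    a_new = (u_new - u) / (beta * dt**2) - v / (beta * dt) - (0.5 / beta - 1.0) * a
    v_new = v + dt * ((1.0 - gamma) * a + gamma * a_new)
    u, v, a = u_new, v_new, a_new

print(f"displacement at t = {n_steps * dt:.2f} s: {u:.5f}")
```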
Abstract:
Aims. In an earlier paper we introduced a new method for determining asteroid families, in which families are identified in the proper frequency domain (n, g, g + s) (where n is the mean motion, and g and s are the secular frequencies of the longitude of pericenter and node, respectively), rather than in the proper element domain (a, e, sin(i)) (semi-major axis, eccentricity, and inclination). Here we improve our techniques for reliably identifying members of families that interact with nonlinear secular resonances of arguments other than g or g + s, and for asteroids near or in mean-motion resonant configurations. Methods. We introduce several new distance metrics in the frequency space, optimal for determining the diffusion in secular resonances of arguments 2g - s, 3g - s, g - s, s, and 2s. We also regularize the dependence of the g frequency as a function of the n frequency (Vesta family) or of the eccentricity e (Hansa family). Results. Our new approaches allow us to recognize as family members objects that were lost with previous methods, while keeping the advantages of the Carruba & Michtchenko (2007, A&A, 475, 1145) approach. More importantly, an analysis in the frequency domain permits a deeper understanding of the dynamical evolution of asteroid families that is not always obtainable with an analysis in the proper element domain.
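To make the frequency-domain identification concrete, the sketch below computes a weighted distance between two bodies' (n, g, g + s) frequency vectors; the weights and numerical values are placeholders, not the metrics or coefficients defined in the cited papers.

```python
# Illustration of clustering in the proper-frequency domain (n, g, g + s):
# a weighted Euclidean distance between frequency vectors. The weights and
# values below are placeholders, not the published metric coefficients.
import numpy as np

def frequency_distance(body_a, body_b, weights=(1.0, 1.0, 1.0)):
    """body = (n, g, g + s); returns a weighted Euclidean distance."""
    da = np.asarray(body_a) - np.asarray(body_b)
    return float(np.sqrt(np.sum(np.asarray(weights) * da**2)))

vec_a = (99.2, 36.9, 5.6)   # illustrative placeholder frequency values
vec_b = (99.4, 37.1, 5.9)
print(f"distance = {frequency_distance(vec_a, vec_b):.3f}")
```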
Abstract:
In this work an iterative strategy is developed to tackle the problem of coupling dimensionally heterogeneous models in the context of fluid mechanics. The procedure proposed here makes use of a reinterpretation of the original problem as a nonlinear interface problem, to which classical nonlinear solvers can be applied. Strong coupling of the partitions is achieved while using a different code for each partition, with each code treated in black-box mode. The main application for which this procedure is envisaged arises when modeling hydraulic networks in which complex and simple subsystems are treated using detailed and simplified models, respectively. The potentialities and the performance of the strategy are assessed through several examples involving transient flows and complex network configurations.
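A minimal sketch of the coupling strategy, assuming the two partitions are available only as black-box functions and the interface unknown is found with a classical nonlinear solver; the stand-in "detailed" and "simplified" models and their coefficients are invented for illustration.

```python
# Minimal sketch: dimensionally heterogeneous partitions treated as black boxes,
# with the interface unknown found by a classical nonlinear solver.
import numpy as np
from scipy.optimize import fsolve

def detailed_model(interface_pressure):
    """Black-box 'detailed' partition: flow delivered at the interface."""
    return 2.0 * np.sqrt(np.maximum(interface_pressure, 0.0))

def simplified_model(interface_flow):
    """Black-box 'simplified' partition: pressure required for that flow."""
    return 5.0 + 0.05 * interface_flow**2

def interface_residual(p):
    # Consistency condition at the interface: both partitions agree on pressure.
    return simplified_model(detailed_model(p)) - p

p_star = fsolve(interface_residual, x0=10.0)[0]
print(f"coupled interface pressure: {p_star:.4f}, flow: {detailed_model(p_star):.4f}")
```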
Abstract:
This paper studies a nonlinear, discrete-time matrix system arising in the stability analysis of Kalman filters. These systems present an internal coupling between the state components that gives rise to complex dynamic behavior. The problem of partial stability, which requires that a specific component of the state of the system converge exponentially, is studied and solved. The convergent state component is strongly linked with the behavior of Kalman filters, since it can be used to provide bounds for the error covariance matrix under uncertainties in the noise measurements. We exploit the special features of the system, mainly the connections with linear systems, to obtain an algebraic test for partial stability. Finally, motivated by applications in which polynomial divergence of the estimates is acceptable, we study and solve a partial semistability problem.
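The kind of nonlinear matrix recursion studied here can be illustrated with the Riccati difference equation for a Kalman filter's error covariance, iterated until the covariance iterate converges; the system matrices below are placeholders and this is only an example of the class of systems, not the paper's analysis.

```python
# Sketch of a nonlinear discrete-time matrix recursion: the Riccati difference
# equation for a Kalman filter's error covariance, iterated to convergence.
# System matrices are placeholders.
import numpy as np

A = np.array([[1.0, 0.1], [0.0, 0.9]])
C = np.array([[1.0, 0.0]])
Q = 0.01 * np.eye(2)        # process noise covariance
R = np.array([[0.1]])       # measurement noise covariance

P = np.eye(2)
for k in range(200):
    # Riccati update: P_{k+1} = A P A' - A P C'(C P C' + R)^{-1} C P A' + Q
    S = C @ P @ C.T + R
    P_next = A @ P @ A.T - A @ P @ C.T @ np.linalg.solve(S, C @ P @ A.T) + Q
    if np.linalg.norm(P_next - P) < 1e-12:
        print(f"covariance iterate converged after {k + 1} steps")
        break
    P = P_next

print(np.round(P, 5))
```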