937 results for Higher order interior point method
Abstract:
Traditional optics has provided ways to compensate for some common visual limitations (up to second-order visual impairments) through spectacles or contact lenses. Recent developments in wavefront science make it possible to obtain an accurate model of the Point Spread Function (PSF) of the human eye. Through what is known as the "Wavefront Aberration Function" of the human eye, exact knowledge of the optical aberration of the human eye is possible, allowing a mathematical model of the PSF to be obtained. This model could be used to pre-compensate (inverse-filter) the images displayed on computer screens in order to counter the distortion in the user's eye. This project takes advantage of the fact that the wavefront aberration function, commonly expressed as a Zernike polynomial, can be generated from the ophthalmic prescription used to fit spectacles to a person. This allows the pre-compensation, or on-screen deblurring, to be done for various visual impairments up to second order (commonly known as myopia, hyperopia, or astigmatism). The technique proposed towards that goal is presented, together with results obtained using a lens of known PSF introduced into the visual path of subjects without visual impairment. In addition to substituting for the effect of spectacles or contact lenses in correcting the low-order visual limitations of the viewer, the significance of this approach is that it has the potential to address higher-order abnormalities in the eye, currently not correctable by simple means.
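To make the pre-compensation step concrete, the following sketch applies a regularized (Wiener-style) inverse filter in the frequency domain, given a PSF obtained elsewhere (for example, from the Zernike expansion of the wavefront aberration). The function name and the regularization constant `k` are illustrative assumptions, not the project's actual implementation.

```python
import numpy as np

def precompensate(image, psf, k=1e-2):
    """Wiener-style inverse filter: pre-distorts `image` so that blurring by
    `psf` (the eye's point spread function) approximately restores it.
    `k` is a regularization constant that limits noise amplification."""
    # Pad the PSF to the image size and centre it at the origin
    psf_full = np.zeros_like(image, dtype=float)
    ph, pw = psf.shape
    psf_full[:ph, :pw] = psf
    psf_full = np.roll(psf_full, (-(ph // 2), -(pw // 2)), axis=(0, 1))

    H = np.fft.fft2(psf_full)             # optical transfer function
    G = np.fft.fft2(image.astype(float))  # spectrum of the target image
    F = G * np.conj(H) / (np.abs(H) ** 2 + k)   # regularized inverse filter
    out = np.real(np.fft.ifft2(F))
    # Rescale to a displayable [0, 1] range
    return (out - out.min()) / (out.max() - out.min() + 1e-12)
```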
Abstract:
This work consists of the conception, development and implementation of a computational CAE routine containing algorithms suitable for stress and strain analysis. The system was integrated into an academic software package named OrtoCAD. The expansion algorithms for the CAE interface produced in this work were developed in FORTRAN with the objective of extending the applications of two earlier PPGEM-UFRN projects: the design and fabrication of an electromechanical reader, and the OrtoCAD software. OrtoCAD is an interface that originally included the visualization of prosthetic sockets from data obtained with an electromechanical reader (LEM). The LEM is essentially a three-dimensional scanner based on reverse engineering. First, the geometry of a residual limb (i.e., the remaining part of an amputated leg to which the prosthesis is fitted) is obtained from the data generated by the LEM using reverse engineering concepts. The proposed FEA core uses shell theory, in which a 2D surface is generated from the 3D part supplied by OrtoCAD. The shell analysis program uses the well-known Finite Element Method to describe the geometry and the behavior of the material. The program is based on nine-node quadrilateral Lagrangian elements with a higher-order displacement field for a better description of the stress field through the thickness. As a result, the new FEA routine brings significant advantages to OrtoCAD: independence from high-cost commercial software; new routines added to the OrtoCAD library that handle more realistic problems by using failure criteria for composite materials; improved FEA performance through a specific mesh element with a higher number of nodes; and, finally, the benefits of an open-source project, offering the intrinsic versatility and wide possibilities of editing and/or optimization that may be needed in the future.
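For reference, the shape functions of the nine-node quadrilateral Lagrangian element mentioned above are tensor products of one-dimensional quadratic Lagrange polynomials; the short sketch below (an illustration, not code from OrtoCAD or the FORTRAN routine) evaluates them at a point (ξ, η) of the reference element.

```python
import numpy as np

def lagrange_quadratic_1d(x):
    """Quadratic Lagrange polynomials on [-1, 1] with nodes at -1, 0, +1."""
    return np.array([0.5 * x * (x - 1.0),   # node at -1
                     1.0 - x * x,           # node at  0
                     0.5 * x * (x + 1.0)])  # node at +1

def shape_functions_q9(xi, eta):
    """Shape functions of the nine-node Lagrangian quadrilateral, built as
    the tensor product of the 1D quadratic polynomials in xi and eta."""
    return np.outer(lagrange_quadratic_1d(eta),
                    lagrange_quadratic_1d(xi)).ravel()  # 9 values, one per node

# Partition-of-unity check: the nine shape functions sum to 1 everywhere
assert abs(shape_functions_q9(0.3, -0.7).sum() - 1.0) < 1e-12
```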
Abstract:
One of the main problems related to the use of diesel as fuel is the presence of sulfur (S), which causes environmental pollution and corrosion of engines. In order to minimize the consequences of the release of this pollutant, Brazilian law established the maximum sulfur content that diesel fuel may have. To meet these requirements, diesel with a maximum sulfur concentration of 10 mg/kg (S10) has been widely marketed in the country. However, the reduction of sulfur can lead to changes in the physicochemical properties of the fuel, which are essential for the performance of road vehicles. This work aims to identify the main changes in the physicochemical properties of diesel fuel and how they are related to the reduction of sulfur content. Samples of diesel types S10, S500 and S1800 were tested according to the methods of the American Society for Testing and Materials (ASTM). The fuels were also characterized by thermogravimetric analysis (TG) and subjected to physical distillation (ASTM D86) and simulated distillation by gas chromatography (ASTM D2887). The results showed that the reduction of sulfur made the fuel lighter and more fluid, improving its applicability in low-temperature environments and making it safer to transport and store. The simulated distillation data showed that decreasing the sulfur content resulted in higher initial boiling point temperatures and lower boiling temperatures for the medium and heavy fractions. Thermogravimetric analysis showed a mass loss event attributed to the volatilization or distillation of light and medium hydrocarbons. Based on these data, the kinetic behavior of the samples was investigated, and it was observed that the activation energies (Ea) did not show significant changes over the course of the conversion. Considering the average of these energies, S1800 had the highest Ea during the conversion and S10 the lowest values.
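The conversion-dependent activation energies mentioned above are usually obtained from the TG data with a model-free (isoconversional) analysis; as a representative example (the abstract does not state which method was applied), the differential Friedman form evaluated at a fixed conversion α reads

```latex
\ln\!\left(\frac{d\alpha}{dt}\right)_{\alpha}
  = \ln\!\big[A_{\alpha}\, f(\alpha)\big] - \frac{E_{a,\alpha}}{R\,T},
```

so that E_{a,α} follows from the slope of ln(dα/dt) against 1/T at constant conversion.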
Abstract:
The great interest in nonlinear system identification is mainly due to the fact that a large number of real systems are complex and need to have their nonlinearities considered so that their models can be successfully used in applications of control, prediction, inference, among others. This work evaluates the application of Fuzzy Wavelet Neural Networks (FWNN) to identify nonlinear dynamical systems subjected to noise and outliers. Generally, these elements cause negative effects on the identification procedure, resulting in erroneous interpretations of the dynamical behavior of the system. The FWNN combines in a single structure the ability of fuzzy logic to deal with uncertainties, the multiresolution characteristics of wavelet theory, and the learning and generalization abilities of artificial neural networks. Usually, the learning procedure of these neural networks is carried out by a gradient-based method that uses the mean squared error as its cost function. This work proposes the replacement of this traditional function by an Information Theoretic Learning similarity measure called correntropy. With this similarity measure, higher-order statistics can be taken into account during the FWNN training process. For this reason, it is better suited to non-Gaussian error distributions and makes the training less sensitive to the presence of outliers. In order to evaluate this replacement, FWNN models are obtained in two identification case studies: a real nonlinear system, consisting of a multisection tank, and a simulated system based on a model of the human knee joint. The results demonstrate that using correntropy as the cost function of the error backpropagation algorithm makes the identification procedure with FWNN models more robust to outliers. However, this is only achieved if the Gaussian kernel width of the correntropy is properly adjusted.
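A minimal sketch of the correntropy measure used as the training cost is given below, assuming the Gaussian kernel; maximizing it (rather than minimizing the MSE) makes large, outlier-induced errors contribute almost nothing. The kernel width `sigma` is the parameter that, as noted above, must be properly adjusted.

```python
import numpy as np

def correntropy(errors, sigma=1.0):
    """Empirical correntropy of the error samples with a Gaussian kernel.
    Unlike the MSE, large errors (outliers) barely affect the result."""
    e = np.asarray(errors, dtype=float)
    kernel = np.exp(-e**2 / (2.0 * sigma**2)) / (np.sqrt(2.0 * np.pi) * sigma)
    return kernel.mean()

# A single gross outlier dominates the MSE but not the correntropy.
errors = np.array([0.1, -0.2, 0.05, 8.0])
print(correntropy(errors, sigma=0.5), np.mean(errors**2))
```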
Abstract:
This dissertation presents a study and experimental research on asymmetric coding of stereoscopic video. A review of 3D technologies, video formats and coding is first presented, and then particular emphasis is given to asymmetric coding of 3D content and to performance evaluation methods, based on subjective measures, for methods using asymmetric coding. The research objective was defined as an extension of the current concept of asymmetric coding for stereo video. To achieve this objective, the first step consists in defining regions in the spatial dimension of the auxiliary view with different perceptual relevance within the stereo pair, which are identified by a binary mask. These regions are then encoded with better quality (lower quantisation) for the most relevant ones and worse quality (higher quantisation) for those with lower perceptual relevance. The actual estimation of the relevance of a given region is based on a measure of disparity computed from the absolute difference between the views. To allow encoding of a stereo sequence using this method, a reference H.264/MVC encoder (JM) has been modified to accept additional configuration parameters and inputs. The final encoder is still standard compliant. In order to show the viability of the method, subjective assessment tests were performed over a wide range of objective qualities of the auxiliary view. The results of these tests allow us to establish three main conclusions. First, it is shown that the proposed method can be more efficient than traditional asymmetric coding when encoding stereo video at higher qualities/rates. The method can also be used to extend the threshold at which uniform asymmetric coding methods start to have an impact on the subjective quality perceived by the observers. Finally, the issue of eye dominance is addressed. Results from stereo still images displayed over a short period of time showed that it has little or no impact on the proposed method.
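The relevance-mask idea can be sketched as follows: compute the absolute difference between the two views, aggregate it per block, and threshold the result to obtain the binary mask that selects which blocks of the auxiliary view receive finer quantization. The block size, threshold and QP values below are illustrative assumptions; the actual implementation was done inside the modified JM H.264/MVC reference encoder.

```python
import numpy as np

def relevance_mask(left, right, block=16, threshold=10.0):
    """Binary mask of perceptually relevant blocks of the auxiliary view,
    estimated from the absolute difference between the two views."""
    diff = np.abs(left.astype(float) - right.astype(float))
    rows, cols = diff.shape[0] // block, diff.shape[1] // block
    mask = np.zeros((rows, cols), dtype=bool)
    for by in range(rows):
        for bx in range(cols):
            tile = diff[by*block:(by+1)*block, bx*block:(bx+1)*block]
            mask[by, bx] = tile.mean() > threshold  # high disparity -> relevant
    return mask

def qp_map(mask, qp_fine=28, qp_coarse=40):
    """Per-block quantization parameters: lower QP (better quality) for
    relevant blocks, higher QP for the remaining ones."""
    return np.where(mask, qp_fine, qp_coarse)
```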
Abstract:
Recent work has demonstrated the strong qualitative differences between the dynamics near a glass transition driven by short-ranged repulsion and one governed by short-ranged attraction. Here, we study in detail the behavior of non-linear, higher-order correlation functions that measure the growth of length scales associated with dynamical heterogeneity in both types of systems. We find that this measure is qualitatively different in the repulsive and attractive cases with regard to the wave vector dependence as well as the time dependence of the standard non-linear four-point dynamical susceptibility. We discuss the implications of these results for the general understanding of dynamical heterogeneity in glass-forming liquids.
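For reference, one common definition of the four-point dynamical susceptibility discussed here (conventions differ slightly between studies) starts from the self-overlap and its fluctuations:

```latex
Q(t) = \frac{1}{N}\sum_{i=1}^{N} w\!\left(\left|\mathbf{r}_i(t)-\mathbf{r}_i(0)\right|\right),
\qquad
\chi_4(t) = N\left[\left\langle Q(t)^2\right\rangle - \left\langle Q(t)\right\rangle^2\right],
```

with w an overlap window function; the wave-vector dependence studied above enters through the corresponding four-point structure factor S_4(q, t).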
Abstract:
Research into the dynamicity of job performance criteria has found evidence suggesting the presence of rank-order changes to job performance scores across time as well as intraindividual trajectories in job performance scores across time. These findings have influenced a large body of research into (a) the dynamicity of validities of individual differences predictors of job performance and (b) the relationship between individual differences predictors of job performance and intraindividual trajectories of job performance. In the present dissertation, I addressed these issues within the context of the Five Factor Model of personality. The Five Factor Model is arranged hierarchically, with five broad higher-order factors subsuming a number of more narrowly tailored personality facets. Research has debated the relative merits of broad versus narrow traits for predicting job performance, but the entire body of research has addressed the issue from a static perspective -- by examining the relative magnitude of validities of global factors versus their facets. While research along these lines has been enlightening, theoretical perspectives suggest that the validities of global factors versus their facets may differ in their stability across time. Thus, research is needed to not only compare the relative magnitude of validities of global factors versus their facets at a single point in time, but also to compare the relative stability of validities of global factors versus their facets across time. Also necessary to advance cumulative knowledge concerning intraindividual performance trajectories is research into broad vs. narrow traits for predicting such trajectories. In the present dissertation, I addressed these issues using a four-year longitudinal design. The results indicated that the validities of global conscientiousness were stable across time, while the validities of conscientiousness facets were more likely to fluctuate. However, the validities of emotional stability and extraversion facets were no more likely to fluctuate across time than those of the factors. Finally, while some personality factors and facets predicted performance intercepts (i.e., performance at the first measurement occasion), my results failed to indicate a significant effect of any personality variable on performance growth. Implications for research and practice are discussed.
Abstract:
DNA sequences that are rich in the guanine nucleic base possess the ability to fold into higher-order structures called G-quadruplexes. These higher-level structures are formed as a result of two sets of four guanine bases hydrogen-bonding together in a planar arrangement called a guanine quartet. Guanine quartets subsequently stack upon each other to form quadruplexes. G-quadruplexes are mainly localized in telomeres as well as in oncogene promoters. One unique and promising therapeutic approach against cancer involves targeting and stabilizing G-quadruplexes with small molecules, generally in order to suppress oncogene expression and telomerase enzyme activity; the latter has been found to contribute to “out-of-control” cell growth in ca. 80-85% of all cancer cells and primary tumours while being absent in normal somatic cells. In this work, we present efforts towards designing and synthesizing acridine-based macrocycles (Mh) and (Mb) with the purpose of providing potential G4 ligands that are suited for selective binding to G4 vs. duplex DNA and that stabilize G-quadruplex structures. The two ligands described in this study include an acridine core, which provides an aromatic surface capable of π-π interactions with the surface of G-quadruplexes. The successful synthesis of 4,5-diaminoacridine, an essential fragment of the macrocycles (Mh) and (Mb), is described in chapter 2. In order to investigate the synthetic method for macrocyclization, model compounds comprising almost half of the designed macrocycles were explored. As discussed in chapter 3, the synthesis of the model compound for (Mb) turned out to be challenging. However, as a step towards the synthesis of (Mh), the synthesis of the hydrogen-containing model compound, which constitutes almost half of the desired macrocycle (Mh), was achieved in our group and proved to be promising.
Abstract:
In this thesis, novel analog-to-digital and digital-to-analog generalized time-interleaved variable bandpass sigma-delta modulators are designed, analysed, evaluated and implemented that are suitable for high-performance data conversion for a broad spectrum of applications. These generalized time-interleaved variable bandpass sigma-delta modulators can perform noise-shaping for any centre frequency from DC to Nyquist. The proposed topologies are well-suited for Butterworth, Chebyshev, inverse-Chebyshev and elliptical filters, where designers have the flexibility of specifying the centre frequency and bandwidth as well as the passband and stopband attenuation parameters. The application of the time-interleaving approach, in combination with these bandpass loop-filters, not only overcomes the limitations associated with conventional and mid-band resonator-based bandpass sigma-delta modulators, but also offers an elegant means to increase the conversion bandwidth, thereby relaxing the need to use faster or higher-order sigma-delta modulators. A step-by-step design technique has been developed for the design of time-interleaved variable bandpass sigma-delta modulators. Using this technique, an assortment of lower- and higher-order single- and multi-path generalized A/D variable bandpass sigma-delta modulators were designed, evaluated and compared in terms of their signal-to-noise ratios, hardware complexity, stability, tonality and sensitivity for ideal and non-ideal topologies. Extensive behavioural-level simulations verified that one of the proposed topologies not only used fewer coefficients but also exhibited greater robustness to non-idealities. Furthermore, second-, fourth- and sixth-order single- and multi-path digital variable bandpass sigma-delta modulators were designed using this technique. The mathematical modelling and evaluation of tones caused by the finite wordlengths of these digital multi-path sigma-delta modulators, when excited by sinusoidal input signals, are also derived from first principles and verified using simulation and experimental results. The fourth-order digital variable bandpass sigma-delta modulator topologies are implemented in VHDL and synthesized on a Xilinx® Spartan™-3 Development Kit using fixed-point arithmetic. Circuit outputs were taken via the RS232 connection provided on the FPGA board and evaluated using MATLAB routines developed by the author; these routines also included the decimation process. The experiments undertaken by the author further validated the design methodology presented in this work. In addition, a novel tunable and reconfigurable second-order variable bandpass sigma-delta modulator has been designed and evaluated at the behavioural level. This topology offers a flexible set of choices for designers and can operate in either single- or dual-mode, enabling multi-band implementations on a single digital variable bandpass sigma-delta modulator. This work is also supported by a novel user-friendly design and evaluation tool, developed in MATLAB/Simulink, that can speed up the design, evaluation and comparison of analog and digital single-stage and time-interleaved variable bandpass sigma-delta modulators. This tool enables the user to specify the conversion type, topology, loop-filter type, path number and oversampling ratio.
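To make the noise-shaping principle concrete, here is a minimal behavioural sketch of a first-order lowpass sigma-delta modulator, written in Python rather than the MATLAB/Simulink tool described above; the variable bandpass topologies of the thesis generalize this loop with higher-order loop filters tuned to an arbitrary centre frequency.

```python
import numpy as np

def first_order_sigma_delta(x):
    """First-order lowpass sigma-delta modulator: the quantization error is
    fed back through an integrator, pushing the noise away from DC."""
    integrator, y_prev = 0.0, 0.0
    y = np.zeros(len(x))
    for n, sample in enumerate(x):
        integrator += sample - y_prev              # integrate the error signal
        y[n] = 1.0 if integrator >= 0.0 else -1.0  # 1-bit quantizer
        y_prev = y[n]
    return y

# Oversampled sine input; lowpass filtering (decimating) the 1-bit output
# recovers the input because the quantization noise sits at high frequencies.
fs, f0, n = 64_000, 100, 4096
t = np.arange(n) / fs
bits = first_order_sigma_delta(0.5 * np.sin(2 * np.pi * f0 * t))
```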
Abstract:
We consider time-dependent convection-diffusion-reaction equations in time-dependent domains, where the motion of the domain boundary is known. The temporal evolution of the domain is handled by the ALE formulation, which remedies the drawbacks of the classical Eulerian and Lagrangian points of view. The position of the boundary and its velocity are extended into the interior of the domain in such a way that strong mesh deformations are prevented. As higher-order time discretizations, continuous Galerkin-Petrov methods (cGP) and discontinuous Galerkin methods (dG) are applied to problems in time-dependent domains. Furthermore, the C¹-continuous Galerkin-Petrov method and the C⁰-continuous Galerkin method are presented. Their solutions can also be obtained in time-dependent domains by a simple, unified postprocessing of the solution of the cGP problem or of the dG problem, respectively. For problems posed on fixed domains with convection and reaction terms that are constant in time, stability results and optimal error estimates for the postprocessed solutions of the cGP and dG methods are given. For time-dependent convection-diffusion-reaction equations in time-dependent domains we present conservative and non-conservative formulations, with particular attention paid to the treatment of the time derivative and the mesh velocity. Stability and optimal error estimates for the conservative and non-conservative formulations semi-discretized in time are presented. Finally, the fully discretized problem is considered, in which a finite element method is used for the spatial discretization of the convection-diffusion-reaction equations in time-dependent domains within the ALE framework. In addition, a local projection stabilization (LPS) is employed to account for convection dominance. Furthermore, it is investigated numerically how the approximation of the domain velocity affects the accuracy of the time discretization methods.
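One standard way to write the non-conservative ALE formulation referred to above, for a scalar unknown u on the moving domain Ω(t) with mesh velocity w (generic symbols chosen here for illustration), is

```latex
\left.\frac{\partial u}{\partial t}\right|_{\mathcal{A}}
  + (\mathbf{b} - \mathbf{w})\cdot\nabla u
  - \varepsilon\,\Delta u + \sigma u = f
  \qquad \text{in } \Omega(t),
```

where ∂u/∂t|_A denotes the time derivative along the trajectories of the ALE mapping, b the convection field, ε the diffusion coefficient and σ the reaction coefficient; the conservative variant additionally accounts for the divergence of the mesh velocity.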
Abstract:
Valveless pulsejets are extremely simple aircraft engines; essentially cleverly designed tubes with no moving parts. These engines utilize pressure waves, instead of machinery, for thrust generation, and have demonstrated thrust-to-weight ratios over 8 and thrust specific fuel consumption levels below 1 lbm/lbf-hr – performance levels that can rival many gas turbines. Despite their simplicity and competitive performance, they have not seen widespread application due to extremely high noise and vibration levels, which have persisted as an unresolved challenge primarily due to a lack of fundamental insight into the operation of these engines. This thesis develops two theories for pulsejet operation (both based on electro-acoustic analogies) that predict measurements better than any previous theory reported in the literature, and then uses them to devise and experimentally validate effective noise reduction strategies. The first theory analyzes valveless pulsejets as acoustic ducts with axially varying area and temperature. An electro-acoustic analogy is used to calculate longitudinal mode frequencies and shapes for prescribed area and temperature distributions inside an engine. Predicted operating frequencies match experimental values to within 6% with the use of appropriate end corrections. Mode shapes are predicted and used to develop strategies for suppressing higher modes that are responsible for much of the perceived noise. These strategies are verified experimentally and via comparison to existing models/data for valveless pulsejets in the literature. The second theory analyzes valveless pulsejets as acoustic systems/circuits in which each engine component is represented by an acoustic impedance. These are assembled to form an equivalent circuit for the engine that is solved to find the frequency response. The theory is used to predict the behavior of two interacting pulsejet engines. It is validated via comparison to experiment and data in the literature. The technique is then used to develop and experimentally verify a method for operating two engines in anti-phase without interfering with thrust production. Finally, Helmholtz resonators are used to suppress higher order modes that inhibit noise suppression via anti-phasing. Experiments show that the acoustic output of two resonator-equipped pulsejets operating in anti-phase is 9 dBA less than the acoustic output of a single pulsejet.
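As a small illustration of the lumped electro-acoustic elements underlying the second theory, the sketch below estimates the resonance frequency of a Helmholtz resonator from its neck and cavity geometry using the textbook lumped-element formula f = (c/2π)·sqrt(A/(V·L_eff)); the dimensions are illustrative, not those of the resonators used in the experiments.

```python
import math

def helmholtz_frequency(neck_area, neck_length, cavity_volume, c=343.0):
    """Resonance frequency of a Helmholtz resonator (lumped-element model).
    A common end correction of ~0.85*radius per open end is added to the
    geometric neck length."""
    radius = math.sqrt(neck_area / math.pi)
    effective_length = neck_length + 2 * 0.85 * radius
    return (c / (2.0 * math.pi)) * math.sqrt(
        neck_area / (cavity_volume * effective_length))

# Illustrative resonator: 2 cm diameter neck, 3 cm long, 1 litre cavity
area = math.pi * 0.01**2                                        # m^2
print(round(helmholtz_frequency(area, 0.03, 1.0e-3), 1), "Hz")  # ~141 Hz
```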
Abstract:
Recent evidence suggests that academic staff face difficulties in applying new technologies as a means of assessing higher-order assessment outcomes such as critical thinking, problem solving and creativity. Although higher education institutional mission statements and course unit outlines purport the value of these higher-order skills, there is still some question about how well academics are equipped to design curricula and, in particular, assessment strategies accordingly. Despite a rhetoric avowing the benefits of these higher-order skills, it has been suggested that academics set assessment tasks up in such a way as to inadvertently lead students on the path towards lower-order outcomes. This is a controversial claim, and one that this paper seeks to explore and critique in terms of challenging the conceptual basis of assessing higher-order skills through new technologies. It is argued that the use of digital media in higher education is leading to a focus on students' ability to use and manipulate these products as an index of their flexibility and adaptability to the demands of the knowledge economy. This focus mirrors market flexibility and encourages programmes and courses of study to be rhetorically packaged as such. Curricular content has become a means to procure more or less elaborate aggregates of attributes. Higher education is now charged with producing graduates who are entrepreneurial and creative in order to drive forward economic sustainability. It is argued that critical independent learning can take place through the democratisation afforded by cultural and knowledge digitization and that assessment needs to acknowledge the changing relations between audience and author, expert and amateur, creator and consumer.
Abstract:
We analyze the causal structure of the two-dimensional (2D) reduced background used in the perturbative treatment of a head-on collision of two D-dimensional Aichelburg–Sexl gravitational shock waves. After defining all causal boundaries, namely the future light-cone of the collision and the past light-cone of a future observer, we obtain characteristic coordinates using two independent methods. The first is a geometrical construction of the null rays which define the various light cones, using a parametric representation. The second is a transformation of the 2D reduced wave operator for the problem into a hyperbolic form. The characteristic coordinates are then compactified allowing us to represent all causal light rays in a conformal Carter–Penrose diagram. Our construction holds to all orders in perturbation theory. In particular, we can easily identify the singularities of the source functions and of the Green’s functions appearing in the perturbative expansion, at each order, which is crucial for a successful numerical evaluation of any higher order corrections using this method.
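For context, in a common convention the D-dimensional Aichelburg–Sexl shock wave considered here can be written in Brinkmann-type coordinates as (overall normalization of the profile omitted)

```latex
ds^2 = -\,du\,dv + d\mathbf{x}_\perp^2 + \Phi(\mathbf{x}_\perp)\,\delta(u)\,du^2 ,
```

where u and v are null coordinates, x_⊥ collects the D−2 transverse coordinates, and the profile Φ is logarithmic in |x_⊥| for D = 4 and proportional to |x_⊥|^{4−D} for D > 4; the head-on collision of two such waves is treated perturbatively off this background.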
Abstract:
This article is concerned with the construction of general isotropic and anisotropic adaptive strategies, as well as hp-mesh refinement techniques, in combination with dual-weighted-residual a posteriori error indicators for the discontinuous Galerkin finite element discretization of compressible fluid flow problems.
Abstract:
Cloud edge mixing plays an important role in the life cycle and development of clouds. Entrainment of subsaturated air affects the cloud at the microscale, altering the number density and size distribution of its droplets. The resulting effect is determined by two timescales: the time required for the mixing event to complete, and the time required for the droplets to adjust to their new environment. If mixing is rapid, evaporation of droplets is uniform and said to be homogeneous in nature. In contrast, slow mixing (compared to the adjustment timescale) results in the droplets adjusting to the transient state of the mixture, producing an inhomogeneous result. Studying this process in real clouds involves the use of airborne optical instruments capable of measuring clouds at the 'single particle' level. Single particle resolution allows for direct measurement of the droplet size distribution. This is in contrast to other 'bulk' methods (i.e., hot-wire probes, lidar, radar) which measure a higher-order moment of the distribution and require assumptions about the distribution shape to compute a size distribution. The sampling strategy of current optical instruments requires them to integrate over a path tens to hundreds of meters long to form a single size distribution. This is much larger than typical mixing scales (which can extend down to the order of centimeters), resulting in difficulties resolving mixing signatures. The Holodec is an optical particle instrument that uses digital holography to record discrete, local volumes of droplets. This method allows statistically significant size distributions to be calculated for centimeter-scale volumes, providing full resolution at the scales important to the mixing process. The hologram also records the three-dimensional position of all particles within the volume, allowing the spatial structure of the cloud volume to be studied. Both of these features represent a new and unique view into the mixing problem. In this dissertation, holographic data recorded during two different field projects are analyzed to study the mixing structure of cumulus clouds. Using Holodec data, it is shown that mixing at cloud top can produce regions of clear but humid air that can subside down along the edge of the cloud as a narrow shell, or advect downshear as a 'humid halo'. This air is then entrained into the cloud at lower levels, producing mixing that appears to be very inhomogeneous. This inhomogeneous-like mixing is shown to be well correlated with regions containing elevated concentrations of large droplets. This is used to argue in favor of the hypothesis that dilution can lead to enhanced droplet growth rates. I also make observations on the microscale spatial structure of observed cloud volumes recorded by the Holodec.
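The competition between the two timescales introduced above is often summarized by a Damköhler-like ratio; the sketch below, using illustrative formulas and numbers rather than values measured with the Holodec, classifies a mixing event as closer to the homogeneous or the inhomogeneous limit.

```python
def mixing_timescale(length_scale, dissipation_rate):
    """Turbulent mixing (eddy turnover) time for an eddy of size L,
    estimated as (L^2 / epsilon)^(1/3)."""
    return (length_scale**2 / dissipation_rate) ** (1.0 / 3.0)

def mixing_regime(tau_mix, tau_adjust):
    """Damkoehler-like ratio of mixing to droplet adjustment time:
    small values -> homogeneous mixing, large values -> inhomogeneous."""
    da = tau_mix / tau_adjust
    return da, ("homogeneous" if da < 1.0 else "inhomogeneous")

# Illustrative numbers: 10 m eddy, dissipation rate 1e-3 m^2 s^-3,
# droplet adjustment (phase relaxation / evaporation) time of ~3 s
tau_mix = mixing_timescale(10.0, 1e-3)   # about 46 s
print(mixing_regime(tau_mix, 3.0))       # -> inhomogeneous
```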