956 results for quadratic polynomial
Abstract:
The tear film plays an important role in preserving the health of the ocular surface and maintaining the optimal refractive power of the cornea. Moreover, dry eye syndrome is one of the most commonly reported eye health problems. This syndrome is caused by abnormalities in the properties of the tear film. Current clinical tools to assess the tear film properties have shown certain limitations. The traditional invasive methods for the assessment of tear film quality, which are used by most clinicians, have been criticized for a lack of reliability and/or repeatability. A range of non-invasive methods of tear assessment have been investigated, but these also present limitations. Hence, no “gold standard” test is currently available to assess tear film integrity. Therefore, improving techniques for the assessment of tear film quality is of clinical significance and the main motivation for the work described in this thesis. In this study, tear film surface quality (TFSQ) changes were investigated by means of high-speed videokeratoscopy (HSV). In this technique, a set of concentric rings formed in an illuminated cone or bowl is projected onto the anterior cornea and their reflection from the ocular surface is imaged on a charge-coupled device (CCD). The reflection of the light is produced in the outermost layer of the cornea, the tear film. Hence, when the tear film is smooth, the reflected image presents a well-structured pattern. In contrast, when the tear film surface presents irregularities, the pattern also becomes irregular due to the scatter and deviation of the reflected light. The videokeratoscope provides an estimate of the corneal topography associated with each Placido disk image. Topographical estimates, which have been used in the past to quantify tear film changes, may not always be suitable for the evaluation of all the dynamic phases of the tear film.
However, the Placido disk image itself, which contains the reflected pattern, may be more appropriate for assessing the tear film dynamics. A set of novel routines was purposely developed to quantify changes in the reflected pattern and to extract a time series estimate of the TFSQ from the video recording. The routine extracts from each frame of the video recording a maximized area of analysis, within which a metric of the TFSQ is calculated. Initially, two metrics, based on Gabor filter and Gaussian gradient-based techniques, were used to quantify the consistency of the pattern’s local orientation as a measure of TFSQ. These metrics helped to demonstrate the applicability of HSV to assessing the tear film, and the influence of contact lens wear on TFSQ. The results suggest that the dynamic-area analysis method of HSV was able to distinguish and quantify the subtle but systematic degradation of tear film surface quality in the inter-blink interval during contact lens wear. It was also able to clearly show a difference between bare-eye and contact-lens-wearing conditions. Thus, the HSV method appears to be a useful technique for quantitatively investigating the effects of contact lens wear on TFSQ. Subsequently, a larger clinical study was conducted to compare HSV with two other non-invasive techniques, lateral shearing interferometry (LSI) and dynamic wavefront sensing (DWS). Of these non-invasive techniques, HSV appeared to be the most precise method for measuring TFSQ, by virtue of its lower coefficient of variation, while LSI appeared to be the most sensitive method for analyzing tear break-up time (TBUT). The capability of each of the non-invasive methods to discriminate dry eye from normal subjects was also investigated. Receiver operating characteristic (ROC) curves were calculated to assess the ability of each method to predict dry eye syndrome.
The LSI technique gave the best results under both natural and suppressed blinking conditions, closely followed by HSV. The DWS did not perform as well as LSI or HSV. The main limitation of the HSV technique identified during this clinical study was a lack of sensitivity for quantifying the build-up/formation phase of the tear film cycle. For that reason, an additional metric based on image transformation and block processing was proposed. In this metric, the area of analysis is transformed from Cartesian to polar coordinates, converting the concentric-circle pattern into an image of quasi-straight lines from which a block statistic is extracted. This metric showed better sensitivity under low pattern disturbance and improved the performance of the ROC curves. Additionally, a theoretical study, based on ray-tracing techniques and topographical models of the tear film, was undertaken to fully understand the HSV measurement and the instrument’s potential limitations. Of special interest was the assessment of the instrument’s sensitivity to subtle topographic changes. The theoretical simulations helped to provide some understanding of the tear film dynamics; for instance, the model extracted for the build-up phase provided some insight into the dynamics of this initial phase. Finally, some aspects of the mathematical modeling of TFSQ time series are reported in this thesis. Over the years, different functions have been used to model such time series and to extract the key clinical parameters (i.e., timing). Unfortunately, those techniques for modeling tear film time series do not simultaneously consider the underlying physiological mechanism and the parameter extraction methods. A set of guidelines is proposed to meet both criteria.
Special attention was given to a commonly used fit, the polynomial function, and to the considerations involved in selecting an appropriate model order so that the true derivative of the signal is accurately represented. The work described in this thesis has shown the potential of high-speed videokeratoscopy for assessing tear film surface quality. A set of novel image and signal processing techniques has been proposed to quantify different aspects of tear film assessment, analysis and modeling. The dynamic-area HSV has shown good performance in a broad range of conditions (i.e., contact lens, normal and dry eye subjects). As a result, this technique could be a useful clinical tool to assess tear film surface quality in the future.
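The model-order point can be illustrated with a small sketch: fit polynomials of increasing order to a noisy series and select the order by an information criterion before differentiating. This is a generic illustration, not the thesis's actual procedure; the synthetic signal, noise level and use of AIC are assumptions.

```python
import numpy as np

def fit_polynomial_by_aic(t, y, max_order=8):
    """Fit polynomials of increasing order and keep the one that
    minimises AIC, so the fitted derivative is not an artefact of
    over- or under-fitting."""
    n = len(y)
    best = None
    for order in range(1, max_order + 1):
        coeffs = np.polyfit(t, y, order)
        rss = np.sum((np.polyval(coeffs, t) - y) ** 2)
        k = order + 1  # number of fitted parameters
        aic = n * np.log(rss / n) + 2 * k
        if best is None or aic < best[0]:
            best = (aic, order, coeffs)
    return best[1], best[2]

# Synthetic stand-in for a TFSQ time series: a smooth quadratic
# trend plus measurement noise.
rng = np.random.default_rng(0)
t = np.linspace(0, 5, 100)
y = 0.2 * t**2 - 0.1 * t + 1.0 + rng.normal(0, 0.05, t.size)

order, coeffs = fit_polynomial_by_aic(t, y)
dydt = np.polyval(np.polyder(coeffs), t)  # fitted derivative of the signal
```

An under-fitted order biases the derivative, while an over-fitted one amplifies noise, which is why an explicit order-selection criterion matters before the derivative is interpreted clinically.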
Abstract:
Although interest in assessing the relationship between temperature and mortality has arisen due to climate change, relatively few data are available on the lag structure of the temperature-mortality relationship, particularly in the Southern Hemisphere. This study identified the lag effects of mean temperature on mortality among age groups and death categories using polynomial distributed lag models in Brisbane, Australia, a subtropical city, over 1996-2004. For a 1 °C increase above the threshold, the highest percent increase in mortality on the current day occurred among people over 85 years (7.2% (95% CI: 4.3%, 10.2%)). The effect estimates among cardiovascular deaths were higher than those among all-cause mortality. For a 1 °C decrease below the threshold, the percent increases in mortality at 21 lag days were 3.9% (95% CI: 1.9%, 6.0%) and 3.4% (95% CI: 0.9%, 6.0%) for people aged over 85 years and for those with cardiovascular diseases, respectively. These findings may have implications for developing intervention strategies to reduce and prevent temperature-related mortality.
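The polynomial distributed lag idea can be sketched generically: lag coefficients are constrained to lie on a low-order polynomial of the lag index (the Almon transformation) and then fitted by least squares. This is an illustration on simulated Gaussian data, not the study's mortality model, which also involved temperature thresholds.

```python
import numpy as np

def almon_pdl(x, y, max_lag=21, poly_degree=3):
    """Polynomial (Almon) distributed lag regression: the lag
    coefficients beta_0..beta_L are constrained to lie on a
    low-order polynomial in the lag index, which smooths the
    estimated lag structure."""
    n = len(y)
    # Lagged design matrix: column l holds x shifted by l days.
    X = np.column_stack([x[max_lag - l: n - l] for l in range(max_lag + 1)])
    yy = y[max_lag:]
    # Polynomial basis over lags: P[l, j] = l**j, so beta = P @ a.
    P = np.vander(np.arange(max_lag + 1), poly_degree + 1, increasing=True)
    a, *_ = np.linalg.lstsq(X @ P, yy, rcond=None)
    return P @ a  # recovered lag coefficients

# Demo on simulated data whose true lag structure is quadratic in l.
rng = np.random.default_rng(1)
n, L = 500, 21
x = rng.normal(size=n)
lags = np.arange(L + 1)
true_beta = 0.5 - 0.04 * lags + 0.001 * lags**2
y = np.convolve(x, true_beta)[:n] + rng.normal(0, 0.01, n)
beta_hat = almon_pdl(x, y, max_lag=L, poly_degree=2)
```

The polynomial constraint reduces 22 free lag coefficients to 3 fitted parameters, which is what makes effect estimates at long lags (such as the 21-day lag above) statistically stable.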
Abstract:
Objective To quantify the lagged effects of mean temperature on deaths from cardiovascular diseases in Brisbane, Australia. Design Polynomial distributed lag models were used to assess the percentage increase in mortality up to 30 days associated with an increase (or decrease) of 1°C above (or below) the threshold temperature. Setting Brisbane, Australia. Patients 22 805 cardiovascular deaths registered between 1996 and 2004. Main outcome measures Deaths from cardiovascular diseases. Results The results show a longer lagged effect on cold days and a shorter lagged effect on hot days. For the hot effect, a statistically significant association was observed only for lag 0–1 days. The percentage increase in mortality was found to be 3.7% (95% CI 0.4% to 7.1%) for people aged ≥65 years and 3.5% (95% CI 0.4% to 6.7%) for all ages associated with an increase of 1°C above the threshold temperature of 24°C. For the cold effect, a significant effect of temperature was found for 10–15 lag days. The percentage estimates for older people and all ages were 3.1% (95% CI 0.7% to 5.7%) and 2.8% (95% CI 0.5% to 5.1%), respectively, with a decrease of 1°C below the threshold temperature of 24°C. Conclusions The lagged effects lasted longer for cold temperatures but were apparently shorter for hot temperatures. There was no substantial difference in the lag effect of temperature on mortality between all ages and those aged ≥65 years in Brisbane, Australia.
Abstract:
In this paper we consider the case of large cooperative communication systems in which terminals use the slotted amplify-and-forward protocol to aid the source in its transmission. Using perturbation expansion methods for resolvents and large deviation techniques, we obtain an expression for the Stieltjes transform of the asymptotic eigenvalue distribution of a sample covariance random matrix of the type HH†, where H is the channel matrix of the transmission model for the protocol we consider. We prove that the resulting expression is similar to the Stieltjes transform, in its quadratic-equation form, for the Marcenko-Pastur distribution.
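The quadratic-equation form of the Marcenko-Pastur Stieltjes transform can be checked numerically. The sketch below uses a plain i.i.d. Gaussian channel matrix as an assumption (the paper's H arises from the slotted amplify-and-forward model): the empirical Stieltjes transform m(z) of the eigenvalues of HH† should approximately satisfy y*z*m^2 + (z + y - 1)*m + 1 = 0, where y is the matrix aspect ratio.

```python
import numpy as np

# Empirical check of the Marcenko-Pastur quadratic for a sample
# covariance matrix H H^T with i.i.d. Gaussian entries (assumed).
rng = np.random.default_rng(2)
N, n = 400, 800
y = N / n                                   # aspect ratio
H = rng.normal(size=(N, n)) / np.sqrt(n)    # normalised channel matrix
eigs = np.linalg.eigvalsh(H @ H.T)

z = 1.0 + 0.5j                              # evaluation point off the real axis
m_emp = np.mean(1.0 / (eigs - z))           # empirical Stieltjes transform
residual = y * z * m_emp**2 + (z + y - 1) * m_emp + 1
```

As N and n grow with fixed ratio, the residual of the quadratic shrinks, which is the finite-size counterpart of the asymptotic statement in the abstract.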
Abstract:
The paper “The Importance of Convexity in Learning with Squared Loss” gave a lower bound on the sample complexity of learning with quadratic loss using a nonconvex function class. The proof contains an error. We show that the lower bound is true under a stronger condition that holds for many cases of interest.
Abstract:
We consider the problem of structured classification, where the task is to predict a label y from an input x, and y has meaningful internal structure. Our framework includes supervised training of Markov random fields and weighted context-free grammars as special cases. We describe an algorithm that solves the large-margin optimization problem defined in [12], using an exponential-family (Gibbs distribution) representation of structured objects. The algorithm is efficient—even in cases where the number of labels y is exponential in size—provided that certain expectations under Gibbs distributions can be calculated efficiently. The method for structured labels relies on a more general result, specifically the application of exponentiated gradient updates [7, 8] to quadratic programs.
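The exponentiated gradient update mentioned above can be sketched on a generic quadratic program over the probability simplex. This is the textbook EG rule, not the paper's full structured-margin algorithm; the step size, iteration count and toy problem are illustrative assumptions.

```python
import numpy as np

def eg_qp(A, b, eta=0.2, iters=1000):
    """Exponentiated gradient for min 0.5*a'Aa - b'a over the
    probability simplex: each coordinate is reweighted by
    exp(-eta * gradient) and renormalised, so every iterate
    remains a proper distribution (cf. the Gibbs-distribution
    representation of structured objects)."""
    a = np.full(len(b), 1.0 / len(b))
    for _ in range(iters):
        grad = A @ a - b
        a = a * np.exp(-eta * grad)
        a /= a.sum()
    return a

# Tiny QP whose simplex-constrained minimiser is the first vertex.
a_star = eg_qp(np.eye(3), np.array([1.0, 0.0, 0.0]))
```

The multiplicative form is what makes the method tractable for structured labels: the update factors over the exponential-family representation, so only expectations under Gibbs distributions are needed rather than an explicit enumeration of labels.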
Abstract:
Background Oxidative stress plays a role in acute and chronic inflammatory disease, and antioxidant supplementation has demonstrated beneficial effects in the treatment of these conditions. This study was designed to determine the optimal dose of an antioxidant supplement in healthy volunteers to inform a Phase 3 clinical trial. Methods The study was designed as a combined Phase 1 and 2 open-label, forced-titration dose-response study in healthy volunteers (n = 21) to determine both acute safety and efficacy. Participants received a dietary supplement in a forced titration over five weeks, commencing with a no-treatment baseline and proceeding through 1, 2, 4 and 8 capsules. The primary outcome measurement was ex vivo change in serum oxygen radical absorbance capacity (ORAC). The secondary outcome measures were undertaken as an exploratory investigation of immune function. Results A significant increase in antioxidant activity (serum ORAC) was observed between baseline (no capsules) and the highest dose of 8 capsules per day (p = 0.040), representing a change of 36.6%. A quadratic function of dose level was fitted in order to estimate a dose-response curve for estimating the optimal dose. The quadratic component of the curve was significant (p = 0.047), with predicted serum ORAC scores increasing from the zero dose to a maximum at a predicted dose of 4.7 capsules per day and decreasing for higher doses. Among the secondary outcome measures, a significant dose effect was observed for phagocytosis by granulocytes, and a significant increase was also observed in COX-2 expression. Conclusion This study suggests that Ambrotose AO® capsules appear to be safe and most effective at a dosage of 4 capsules/day. It is important that this study is not over-interpreted; it aimed to find an optimal dose at which to assess the dietary supplement using a more rigorous clinical trial design.
The study achieved this aim and demonstrated that the dietary supplement has the potential to increase antioxidant activity. The most significant limitation of this study was that it was an open-label Phase 1/Phase 2 trial and is therefore subject to potential bias that is reduced with the use of randomization and blinding. To confirm the benefits of this dietary supplement, these effects now need to be demonstrated in a Phase 3 randomised controlled trial (RCT).
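Estimating an optimal dose from a fitted quadratic reduces to finding the vertex of the parabola. A sketch with hypothetical ORAC-style readings (not the trial's data):

```python
import numpy as np

# Hypothetical ORAC-style readings at each dose level; these numbers
# are illustrative only, not the trial's measurements.
dose = np.array([0.0, 1.0, 2.0, 4.0, 8.0])          # capsules per day
orac = np.array([1000.0, 1150.0, 1260.0, 1360.0, 1240.0])

# Least-squares quadratic dose-response fit: orac ~ c2*d^2 + c1*d + c0
c2, c1, c0 = np.polyfit(dose, orac, 2)

# For a concave fit (c2 < 0), the predicted optimal dose is the
# vertex of the parabola at -c1 / (2*c2).
optimal_dose = -c1 / (2.0 * c2)
```

A significant negative quadratic coefficient is what justifies reading the vertex as a maximum, which is how a continuous predicted optimum (like 4.7 capsules/day) can fall between the discrete doses actually administered.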
Abstract:
Background The majority of peptide bonds in proteins occur in the trans conformation. However, for proline residues, a considerable fraction of prolyl peptide bonds adopt the cis form. Proline cis/trans isomerization is known to play a critical role in protein folding, splicing, cell signaling and transmembrane active transport. Accurate prediction of proline cis/trans isomerization in proteins would have many important applications for the understanding of protein structure and function. Results In this paper, we propose a new approach to predicting proline cis/trans isomerization in proteins using a support vector machine (SVM). Preliminary results indicated that using Radial Basis Function (RBF) kernels leads to better prediction performance than polynomial and linear kernel functions. We used single-sequence information with different local window sizes, amino acid compositions of different local sequences, multiple sequence alignments obtained from PSI-BLAST and secondary structure information predicted by PSIPRED. We explored these different sequence encoding schemes in order to investigate their effects on prediction performance. The training and testing of this approach were performed on a newly enlarged dataset of 2424 non-homologous proteins determined by the X-ray diffraction method, using 5-fold cross-validation. A window size of 11 provided the best performance for determining proline cis/trans isomerization based on the single amino acid sequence. It was found that using multiple sequence alignments in the form of PSI-BLAST profiles could significantly improve prediction performance: the prediction accuracy increased from 62.8% with single sequences to 69.8%, and the Matthews Correlation Coefficient (MCC) improved from 0.26 with single local sequences to 0.40.
Furthermore, when coupled with the secondary structure information predicted by PSIPRED, our method yielded a prediction accuracy of 71.5% and an MCC of 0.43, which are 9% and 0.17 higher, respectively, than those achieved based on single-sequence information. Conclusion A new method has been developed to predict proline cis/trans isomerization in proteins based on a support vector machine, which uses the single amino acid sequence with different local window sizes, the amino acid compositions of local sequences flanking centered proline residues, the position-specific scoring matrices (PSSMs) extracted by PSI-BLAST and the predicted secondary structures generated by PSIPRED. The successful application of the SVM approach in this study reinforces the view that SVM is a powerful tool for predicting proline cis/trans isomerization in proteins and for biological sequence analysis.
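The classifier side of this pipeline can be sketched with a generic RBF-kernel SVM under 5-fold cross-validation. The features below are random stand-ins for the real window/PSSM/secondary-structure encodings, so this illustrates only the shape of the evaluation, not the paper's data or results.

```python
import numpy as np
from sklearn.svm import SVC
from sklearn.model_selection import cross_val_score

# Random stand-ins for the real features: each row would be a window
# of one-hot residues plus PSI-BLAST PSSM columns and PSIPRED
# secondary-structure probabilities around a proline residue, with
# the label marking cis (1) vs trans (0). Here both are synthetic.
rng = np.random.default_rng(3)
n_samples, n_features = 400, 11 * 20   # window of 11 x 20 amino acids
X = rng.normal(size=(n_samples, n_features))
w = rng.normal(size=n_features)
labels = (X @ w + rng.normal(0.0, 1.0, n_samples) > 0).astype(int)

# RBF kernel, which the abstract reports outperformed linear and
# polynomial kernels; 5-fold cross-validation mirrors the evaluation.
clf = SVC(kernel="rbf", C=1.0, gamma="scale")
scores = cross_val_score(clf, X, labels, cv=5)
```

Swapping `kernel="rbf"` for `"linear"` or `"poly"` reproduces the kind of kernel comparison the abstract describes, with cross-validated accuracy (and, on real data, MCC) as the selection criterion.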
Abstract:
We describe a model of computation of the parallel type, which we call 'computing with bio-agents', based on the concept that motions of biological objects such as bacteria or protein molecular motors in confined spaces can be regarded as computations. We begin with the observation that the geometric nature of the physical structures in which model biological objects move modulates the motions of the latter. Consequently, by changing the geometry, one can control the characteristic trajectories of the objects; on this basis, we argue that such systems are computing devices. We investigate the computing power of mobile bio-agent systems and show that they are computationally universal in the sense that they are capable of computing any Boolean function in parallel. We also argue that, under appropriate conditions, bio-agent systems can solve NP-complete problems in probabilistic polynomial time.
Abstract:
Resolving a noted open problem, we show that the Undirected Feedback Vertex Set problem, parameterized by the size of the solution set of vertices, is in the parameterized complexity class Poly(k), that is, polynomial-time pre-processing is sufficient to reduce an initial problem instance (G, k) to a decision-equivalent simplified instance (G', k') where k' ≤ k, and the number of vertices of G' is bounded by a polynomial function of k. Our main result shows an O(k^11) kernelization bound.
Abstract:
In recent years, the development of Unmanned Aerial Vehicles (UAVs) has become a significant growing segment of the global aviation industry. These vehicles are developed with the intention of operating in regions where the presence of onboard human pilots is either too risky or unnecessary. Their popularity with both the military and civilian sectors has seen UAVs used in a diverse range of applications, from reconnaissance and surveillance tasks for the military to civilian uses such as aid relief and monitoring tasks. Efficient energy utilisation on a UAV is essential to its functioning, often to achieve the operational goals of range, endurance and other specific mission requirements. Due to the limitations of the space available and the mass budget on the UAV, there is often a delicate balance between the onboard energy available (i.e. fuel) and achieving the operational goals. This thesis presents an investigation of methods for increasing the energy efficiency of UAVs. One method is the development of a Mission Waypoint Optimisation (MWO) procedure for a small fixed-wing UAV, focusing on improving the onboard fuel economy. MWO deals with a pre-specified set of waypoints by modifying the given waypoints within certain limits to achieve its optimisation objectives of minimising/maximising specific parameters. A simulation model of a UAV was developed in the MATLAB Simulink environment, utilising the AeroSim Blockset and the in-built Aerosonde UAV block and its parameters. This simulation model was separately integrated with a multi-objective Evolutionary Algorithm (MOEA) optimiser and a Sequential Quadratic Programming (SQP) solver to perform single-objective and multi-objective optimisation of a set of real-world waypoints in order to minimise the onboard fuel consumption. The results of both procedures show potential for reducing fuel consumption on a UAV in a flight mission.
Additionally, a parallel Hybrid-Electric Propulsion System (HEPS) for a small fixed-wing UAV incorporating an Ideal Operating Line (IOL) control strategy was developed. An IOL analysis of an Aerosonde engine was performed, and the most efficient points of operation for this engine (i.e. those providing the greatest torque output for the least fuel consumption) were determined. Simulation models of the components in a HEPS were designed and constructed in the MATLAB Simulink environment. It was demonstrated through simulation that a UAV with the current HEPS configuration was capable of achieving a fuel saving of 6.5% compared to the internal combustion engine (ICE)-only configuration. These components form the basis for the development of a complete simulation model of a Hybrid-Electric UAV (HEUAV).
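Waypoint optimisation with an SQP-type solver can be sketched with a toy problem in which fuel burn is taken to be proportional to path length and each waypoint may move within a fixed radius of its nominal position. The objective, radius, waypoints and SciPy's SLSQP solver are stand-ins for the thesis's Aerosonde simulation model.

```python
import numpy as np
from scipy.optimize import minimize

# Nominal 2-D waypoints and the allowed per-waypoint displacement
# radius (both illustrative assumptions, "the certain limits" of MWO).
nominal = np.array([[0.0, 0.0], [3.0, 4.0], [6.0, 1.0], [10.0, 5.0]])
radius = 1.0

def path_length(flat):
    """Total length of the polyline through the waypoints,
    standing in for fuel consumption."""
    pts = flat.reshape(-1, 2)
    return float(np.sum(np.linalg.norm(np.diff(pts, axis=0), axis=1)))

# One inequality constraint per waypoint: stay within its radius.
cons = [
    {"type": "ineq",
     "fun": (lambda flat, i=i: radius**2
             - float(np.sum((flat.reshape(-1, 2)[i] - nominal[i])**2)))}
    for i in range(len(nominal))
]

res = minimize(path_length, nominal.ravel(), method="SLSQP",
               constraints=cons)
```

SLSQP is itself a sequential quadratic programming method, so this mirrors the single-objective branch of the procedure; the multi-objective MOEA branch would instead trade fuel against other mission parameters.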
Abstract:
Higher order spectral analysis is used to investigate nonlinearities in time series of voltages measured from a realization of Chua's circuit. For period-doubled limit cycles, quadratic and cubic nonlinear interactions result in phase coupling and energy exchange between increasing numbers of triads and quartets of Fourier components as the nonlinearity of the system is increased. For circuit parameters that result in a chaotic Rossler-type attractor, bicoherence and tricoherence spectra indicate that both quadratic and cubic nonlinear interactions are important to the dynamics. When the circuit exhibits a double-scroll chaotic attractor the bispectrum is zero, but the tricoherences are high, consistent with the importance of higher-than-second order nonlinear interactions during chaos associated with the double scroll.
Abstract:
Higher-order spectral (bispectral and trispectral) analyses of numerical solutions of the Duffing equation with a cubic stiffness are used to isolate the coupling between the triads and quartets, respectively, of nonlinearly interacting Fourier components of the system. The Duffing oscillator follows a period-doubling intermittency catastrophic route to chaos. For period-doubled limit cycles, higher-order spectra indicate that both quadratic and cubic nonlinear interactions are important to the dynamics. However, when the Duffing oscillator becomes chaotic, global behavior of the cubic nonlinearity becomes dominant and quadratic nonlinear interactions are weak, while cubic interactions remain strong. As the nonlinearity of the system is increased, the number of excited Fourier components increases, eventually leading to broad-band power spectra for chaos. The corresponding higher-order spectra indicate that although some individual nonlinear interactions weaken as nonlinearity increases, the number of nonlinearly interacting Fourier modes increases. Trispectra indicate that the cubic interactions gradually evolve from encompassing a few quartets of Fourier components for period-1 motion to encompassing many quartets for chaos. For chaos, all the components within the energetic part of the power spectrum are cubically (but not quadratically) coupled to each other.
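Bicoherence, used in both studies above to detect quadratic phase coupling between triads of Fourier components, can be estimated by segment averaging. A minimal sketch at a single frequency pair with synthetic signals (not the papers' analysis code):

```python
import numpy as np

def bicoherence_at(x, k1, k2, nfft=128):
    """Segment-averaged bicoherence at one frequency pair: near 1
    when the Fourier components at bins k1, k2 and k1+k2 are
    quadratically phase coupled, near 0 when their phases are
    independent."""
    segs = x[: len(x) // nfft * nfft].reshape(-1, nfft)
    X = np.fft.fft(segs, axis=1)
    triple = X[:, k1] * X[:, k2] * np.conj(X[:, k1 + k2])
    num = np.abs(np.mean(triple))
    den = np.sqrt(np.mean(np.abs(X[:, k1] * X[:, k2])**2)
                  * np.mean(np.abs(X[:, k1 + k2])**2))
    return num / den

# Two synthetic signals: one with quadratic phase coupling (the phase
# at k1+k2 equals the sum of the phases at k1 and k2 in every
# segment), one with an independent random phase at k1+k2.
rng = np.random.default_rng(4)
nfft, nseg, k1, k2 = 128, 200, 10, 15
t = np.arange(nfft)
coupled, uncoupled = [], []
for _ in range(nseg):
    p1, p2, pr = rng.uniform(0, 2 * np.pi, 3)
    base = (np.cos(2 * np.pi * k1 * t / nfft + p1)
            + np.cos(2 * np.pi * k2 * t / nfft + p2))
    coupled.append(base + np.cos(2 * np.pi * (k1 + k2) * t / nfft + p1 + p2))
    uncoupled.append(base + np.cos(2 * np.pi * (k1 + k2) * t / nfft + pr))

b_coupled = bicoherence_at(np.concatenate(coupled), k1, k2)
b_uncoupled = bicoherence_at(np.concatenate(uncoupled), k1, k2)
```

Both signals have identical power spectra, which is exactly why the power spectrum cannot distinguish coupling; only the phase-sensitive third-order statistic does. Tricoherence extends the same construction to quartets of components.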
Abstract:
BACKGROUND: The relationship between temperature and mortality has been explored for decades and many temperature indicators have been applied separately. However, few data are available on how the effects of different temperature indicators vary across mortality categories, particularly in a typical subtropical climate. OBJECTIVE: To assess the associations between various temperature indicators and different mortality categories in Brisbane, Australia during 1996-2004. METHODS: We applied two methods to assess the threshold and temperature indicator for each age group and death category: in the first, mean temperature and the threshold assessed from all-cause mortality were used for all mortality categories; in the second, the specific temperature indicator and threshold for each mortality category were identified separately according to the minimisation of AIC. We used polynomial distributed lag non-linear models to identify effect estimates for mortality associated with a one-degree increase (or decrease) in temperature above (or below) the threshold on the current day, together with lagged effects, under both methods. RESULTS: Akaike's Information Criterion was minimised when mean temperature was used for all non-external deaths and deaths among those aged 75-84 years; when minimum temperature was used for deaths among those aged 0-64 years, 65-74 years and ≥ 85 years, and deaths from respiratory diseases; and when maximum temperature was used for deaths from cardiovascular diseases. The effect estimates using the specific temperature indicators were similar to those using mean temperature, for both current-day and lagged effects. CONCLUSION: Different age groups and death categories were sensitive to different temperature indicators. However, the effect estimates from the specific temperature indicators did not significantly differ from those based on mean temperature.