56 results for Linearly Lindelöf
at Queensland University of Technology - ePrints Archive
Abstract:
Similarity solutions for flow over an impermeable, non-linearly (quadratic) stretching sheet were studied recently by Raptis and Perdikis (Int. J. Non-linear Mech. 41 (2006) 527–529) using a stream function of the form ψ = αxf(η) + βx²g(η). A fundamental error in their problem formulation is pointed out. On correction, it is shown that similarity solutions do not exist for this choice of ψ.
Abstract:
Non-linear feedback shift register (NLFSR) ciphers are cryptographic tools of choice for industry, especially for mobile communication. Their attractive feature is high efficiency when implemented in hardware or software. However, the main problem with NLFSR ciphers is that their security is still not well investigated. The paper makes progress in the study of the security of NLFSR ciphers. In particular, we show a distinguishing attack on linearly filtered NLFSR (or LF-NLFSR) ciphers. We extend the attack to a linear combination of LF-NLFSRs. We investigate the security of a modified version of the Grain stream cipher and show its vulnerability to both key recovery and distinguishing attacks.
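To make the target construction concrete, the following is a minimal sketch of a linearly filtered NLFSR keystream generator. The register length, non-linear feedback function and linear filter taps are arbitrary illustrative choices, not the parameters analysed in the paper.

```python
# Minimal sketch of a linearly filtered NLFSR (LF-NLFSR) keystream generator.
# Register size, feedback function and filter taps are illustrative placeholders,
# not the cipher parameters studied in the cited paper.

def lf_nlfsr_keystream(state, n_bits):
    """Generate n_bits of keystream from an 8-bit initial state (list of 0/1)."""
    s = list(state)
    out = []
    for _ in range(n_bits):
        # Linear filter: XOR of a few register positions gives the output bit.
        out.append(s[0] ^ s[3] ^ s[5])
        # Non-linear feedback: XOR of taps plus an AND (quadratic) term.
        fb = s[0] ^ s[2] ^ (s[4] & s[6]) ^ s[7]
        s = s[1:] + [fb]  # shift the register by one position
    return out

if __name__ == "__main__":
    print(lf_nlfsr_keystream([1, 0, 1, 1, 0, 0, 1, 0], 16))
```

Because the output is a linear function of the register bits, biases in the non-linear state update can propagate to the keystream, which is the structural weakness a distinguishing attack exploits.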
Abstract:
To enhance the performance of the k-nearest neighbors approach in forecasting short-term traffic volume, this paper proposed and tested a two-step approach with the ability to forecast multiple steps. In selecting the k nearest neighbors, a time constraint window is introduced, and local minima of the distances between state vectors are then ranked to avoid overlaps among candidates. Moreover, to control the undesirable impact of extreme values, a novel algorithm with attractive analytical features is developed based on the principal component. The enhanced KNN method has been evaluated using field data, and our comparison analysis shows that it outperformed the competing algorithms in most cases.
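The neighbour-selection steps described above can be sketched roughly as follows. The state-vector length, the time-constraint window and the forecast combination are generic illustrations, the principal-component treatment of extreme values is omitted, and none of the names or parameter values are taken from the paper.

```python
import numpy as np

def knn_forecast(series, k=5, state_len=4, time_window=96, horizon=1):
    """Sketch of KNN traffic-volume forecasting: candidate neighbours are searched
    within a time-constraint window, kept only at local minima of the distance
    sequence (to avoid overlapping candidates), then the k closest are averaged."""
    x = np.asarray(series, dtype=float)
    current = x[-state_len:]                      # current state vector
    last = len(x) - state_len - horizon           # last admissible start index
    start = max(0, last - time_window)            # time-constraint window

    idx = np.arange(start, last)
    dists = np.array([np.linalg.norm(x[i:i + state_len] - current) for i in idx])

    # Keep only local minima of the distance sequence (non-overlapping candidates).
    minima = [j for j in range(1, len(dists) - 1)
              if dists[j] <= dists[j - 1] and dists[j] <= dists[j + 1]]
    if not minima:
        minima = list(range(len(dists)))
    chosen = sorted(minima, key=lambda j: dists[j])[:k]

    # Forecast: average of the values that followed the selected neighbours.
    successors = [x[idx[j] + state_len + horizon - 1] for j in chosen]
    return float(np.mean(successors))

if __name__ == "__main__":
    demo = (np.sin(np.linspace(0, 20, 400)) * 50 + 100).tolist()  # synthetic volumes
    print(knn_forecast(demo, k=5))
```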
Abstract:
The electrochemical characteristics of a series of heteroleptic tris(phthalocyaninato) complexes with identical rare earths or mixed rare earths (Pc)M(OOPc)M(OOPc) [M = Eu–Lu, Y; H2Pc = unsubstituted phthalocyanine, H2(OOPc) = 3,4,12,13,21,22,30,31-octakis(octyloxy)phthalocyanine] and (Pc)Eu(OOPc)Er(OOPc) have been recorded and studied comparatively by cyclic voltammetry (CV) and differential pulse voltammetry (DPV) in CH2Cl2 containing 0.1 M tetrabutylammonium perchlorate (TBAP). Up to five quasi-reversible one-electron oxidations and four one-electron reductions have been revealed. The half-wave potentials of the first, second and fifth oxidations depend on the size of the metal center, but the fifth changes in the opposite direction to that of the first two. Moreover, the difference between the redox potentials of the first oxidation and the first reduction for (Pc)M(OOPc)M(OOPc), 0.85–0.98 V, also decreases linearly with decreasing rare earth ion radius, clearly showing the rare earth ion size effect and indicating enhanced π–π interactions in the triple-deckers connected by smaller lanthanides. This order follows the red-shift seen in the lowest energy band of triple-decker compounds. The electronic differences between the lanthanides and yttrium are more apparent for triple-decker sandwich complexes than for the analogous double-deckers. By comparing triple-decker, double-decker and mononuclear [ZnII] complexes containing the OOPc ligand, the HOMO–LUMO gap has been shown to contract approximately linearly with the number of stacked phthalocyanine ligands.
Abstract:
Areal bone mineral density (aBMD) is the most common surrogate measurement for assessing the bone strength of the proximal femur associated with osteoporosis. Additional factors, however, contribute to the overall strength of the proximal femur, primarily the anatomical geometry. Finite element analysis (FEA) is an effective and widely used computer-based simulation technique for modeling mechanical loading of various engineering structures, providing predictions of displacement and induced stress distribution due to the applied load. FEA is therefore inherently dependent upon both density and anatomical geometry. FEA may be performed on both three-dimensional and two-dimensional models of the proximal femur derived from radiographic images, from which the mechanical stiffness may be predicted. It is examined whether the outcome measures of two-dimensional FEA (two-dimensional finite element analysis of X-ray images, FEXI) and three-dimensional FEA, namely the computed stiffness of the proximal femur, were more sensitive than aBMD to changes in trabecular bone density and femur geometry. It is assumed that if an outcome measure follows known trends with changes in density and geometric parameters, then an increased sensitivity will be indicative of an improved prediction of bone strength. All three outcome measures increased non-linearly with trabecular bone density, increased linearly with cortical shell thickness and neck width, decreased linearly with neck length, and were relatively insensitive to neck-shaft angle. For femoral head radius, aBMD was relatively insensitive, with two-dimensional FEXI and three-dimensional FEA demonstrating a non-linear increase and decrease in sensitivity, respectively. For neck anteversion, aBMD decreased non-linearly, whereas both two-dimensional FEXI and three-dimensional FEA demonstrated a parabolic-type relationship, with maximum stiffness achieved at an angle of approximately 15°. Multi-parameter analysis showed that all three outcome measures demonstrated their highest sensitivity to a change in cortical thickness. When changes in all input parameters were considered simultaneously, three- and two-dimensional FEA had statistically equal sensitivities (0.41±0.20 and 0.42±0.16 respectively, p = ns) that were significantly higher than the sensitivity of aBMD (0.24±0.07; p = 0.014 and 0.002 for three-dimensional and two-dimensional FEA respectively). This simulation study suggests that, since mechanical integrity and FEA are inherently dependent upon anatomical geometry, FEXI stiffness, being derived from conventional two-dimensional radiographic images, may provide a better prediction of bone strength of the proximal femur than that currently provided by aBMD.
Abstract:
A major focus of research in nanotechnology is the development of novel, high-throughput techniques for fabrication of arbitrarily shaped surface nanostructures of sub-100 nm to atomic scale. A related pursuit is the development of simple and efficient means for parallel manipulation and redistribution of adsorbed atoms, molecules and nanoparticles on surfaces – adparticle manipulation. These techniques will be used for the manufacture of nanoscale surface-supported functional devices in nanotechnologies such as quantum computing, molecular electronics and lab-on-a-chip, as well as for modifying surfaces to obtain novel optical, electronic, chemical, or mechanical properties. A favourable approach to the formation of surface nanostructures is self-assembly. In self-assembly, nanostructures are grown by aggregation of individual adparticles that diffuse by thermally activated processes on the surface. The passive nature of this process means it is generally not suited to the formation of arbitrarily shaped structures. The self-assembly of nanostructures at arbitrary positions has been demonstrated, though these have typically required a pre-patterning treatment of the surface using sophisticated techniques such as electron beam lithography. On the other hand, a parallel adparticle manipulation technique would be suited to directing the self-assembly process to occur at arbitrary positions, without the need for pre-patterning the surface. There is at present a lack of techniques for parallel manipulation and redistribution of adparticles to arbitrary positions on the surface. This is an issue that needs to be addressed, since these techniques can play an important role in nanotechnology. In this thesis, we propose such a technique – thermal tweezers. In thermal tweezers, adparticles are redistributed by localised heating of the surface. This locally enhances surface diffusion of adparticles so that they rapidly diffuse away from the heated regions. Using this technique, the redistribution of adparticles to form a desired pattern is achieved by heating the surface at specific regions. In this project, we have focussed on the holographic implementation of this approach, where the surface is heated by holographic patterns of interfering pulsed laser beams. This implementation is suitable for the formation of arbitrarily shaped structures; the only condition is that the shape can be produced by holographic means. In the simplest case, the laser pulses are linearly polarised and intersect to form an interference pattern that is a modulation of intensity along a single direction. Strong optical absorption at the intensity maxima of the interference pattern results in an approximately sinusoidal variation of the surface temperature along one direction. The main aim of this research project is to investigate the feasibility of the holographic implementation of thermal tweezers as an adparticle manipulation technique. Firstly, we investigate theoretically the surface diffusion of adparticles in the presence of sinusoidal modulation of the surface temperature. Very strong redistribution of adparticles is predicted when there is strong interaction between the adparticle and the surface and the amplitude of the temperature modulation is ~100 K. We have proposed a thin metallic film deposited on a glass substrate, heated by interfering laser beams (optical wavelengths), as a means of generating a very large amplitude of surface temperature modulation.
Indeed, we predict theoretically, by numerical solution of the thermal conduction equation, that the amplitude of the temperature modulation on the metallic film can be much greater than 100 K when heated by nanosecond pulses with an energy of ~1 mJ. The formation of surface nanostructures of less than 100 nm in width is predicted at optical wavelengths in this implementation of thermal tweezers. Furthermore, we propose a simple extension to this technique where a spatial phase shift of the temperature modulation effectively doubles or triples the resolution. At the same time, increased resolution is predicted by reducing the wavelength of the laser pulses. In addition, we present two distinctly different, computationally efficient numerical approaches for theoretical investigation of the surface diffusion of interacting adparticles – the Monte Carlo Interaction Method (MCIM) and the random potential well method (RPWM). Using each of these approaches we have investigated thermal tweezers for redistribution of both strongly and weakly interacting adparticles. We have predicted that strong interactions between adparticles can increase the effectiveness of thermal tweezers, demonstrating practically complete adparticle redistribution into the low-temperature regions of the surface. This is promising from the point of view of thermal tweezers applied to directed self-assembly of nanostructures. Finally, we present a new and more efficient numerical approach to the theoretical investigation of thermal tweezers for non-interacting adparticles. In this approach, the local diffusion coefficient is determined from solution of the Fokker-Planck equation. The diffusion equation is then solved numerically using the finite volume method (FVM) to directly obtain the probability density of adparticle position. We compare predictions of this approach to those of the Ermak algorithm solution of the Langevin equation, and relatively good agreement is shown at intermediate and high friction. In the low-friction regime, we predict and investigate the phenomenon of ‘optimal’ friction and describe its occurrence due to very long jumps of adparticles as they diffuse from the hot regions of the surface. Future research directions, both theoretical and experimental, are also discussed.
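As a toy illustration of the redistribution mechanism (and not the MCIM, RPWM or FVM implementations developed in the thesis), the sketch below performs a simple Monte Carlo hopping simulation of non-interacting adparticles on a one-dimensional lattice whose hop rates follow an Arrhenius law under a sinusoidally modulated surface temperature; all parameter values are arbitrary assumptions chosen only to make the effect visible.

```python
import numpy as np

# Toy Monte Carlo sketch: non-interacting adparticles hopping on a 1D lattice
# with a sinusoidally modulated surface temperature. Parameters are arbitrary;
# this is not the MCIM, RPWM or FVM code from the thesis.

rng = np.random.default_rng(0)
L = 200                        # lattice sites
N = 1000                       # adparticles
E_a = 0.10                     # hop activation energy, eV
k_B = 8.617e-5                 # Boltzmann constant, eV/K
T0, dT, wavelength = 300.0, 100.0, 50.0

x = rng.integers(0, L, size=N)                         # initial positions
T = T0 + dT * np.sin(2 * np.pi * np.arange(L) / wavelength)
hop_prob = np.exp(-E_a / (k_B * T))                    # relative Arrhenius hop rate
hop_prob /= hop_prob.max()                             # hottest site hops every step

for _ in range(20000):
    attempt = rng.random(N) < hop_prob[x]              # thermally activated attempts
    step = rng.choice([-1, 1], size=N)                 # unbiased hop direction
    x = np.where(attempt, (x + step) % L, x)           # periodic boundary

# Particles accumulate where the temperature (and hence hop rate) is lowest.
occupancy, _ = np.histogram(x, bins=L, range=(0, L))
cold = np.argsort(T)[:L // 10]
hot = np.argsort(T)[-L // 10:]
print("mean occupancy, coldest 10% of sites:", occupancy[cold].mean())
print("mean occupancy, hottest 10% of sites:", occupancy[hot].mean())
```

Because the escape rate from a site grows with local temperature while the hop direction is unbiased, the long-time occupancy is inversely proportional to the local hop rate, which is the qualitative "diffuse away from the heated regions" behaviour described above.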
Abstract:
Aims: Dietary glycemic index (GI) and glycemic load (GL) have been associated with risk of chronic diseases, yet limited research exists on patterns of consumption in Australia. Our aims were to investigate glycemic carbohydrate in a population of older women, identify major contributing food sources, and determine low, moderate and high ranges. Methods: Subjects were 459 Brisbane women aged 42–81 years participating in the Longitudinal Assessment of Ageing in Women. Diet history interviews were used to assess usual diet, and results were analysed into energy and macronutrients using the FoodWorks dietary analysis program combined with a customised GI database. Results: Mean±SD dietary GI was 55.6±4.4% and mean dietary GL was 115±25. A low GI in this population was ≤52.0, corresponding to the lowest quintile of dietary GI, and a low GL was ≤95. GI showed a quadratic relationship with age (P=0.01), with a slight decrease observed in women aged in their 60s relative to younger or older women. GL decreased linearly with age (P<0.001). Bread was the main contributor to carbohydrate and dietary GL (17.1% and 20.8%, respectively), followed by fruit (15.5% and 14.2%), and dairy for carbohydrate (9.0%) or breakfast cereals for GL (8.9%). Conclusions: In this population, dietary GL decreased with increasing age; however, this was likely a result of higher energy intakes in younger women. A focus on careful selection of lower GI items within the bread and breakfast cereal food groups would be an effective strategy for decreasing dietary GL in this population of older women.
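For readers unfamiliar with how the glycemic quantities are tallied, a minimal sketch is given below: GL is conventionally computed as GI multiplied by available carbohydrate (in grams) divided by 100, and dietary GI is the carbohydrate-weighted mean GI. The food entries and values are illustrative placeholders, not records from the study's customised GI database.

```python
# Minimal sketch of a daily glycemic load tally: GL = GI * carbohydrate (g) / 100.
# Food names, GI values and carbohydrate amounts are illustrative only.

foods = [
    # (food, GI, available carbohydrate per serve in grams)
    ("wholegrain bread, 2 slices", 51, 30.0),
    ("apple, 1 medium",            38, 19.0),
    ("breakfast cereal, 1 bowl",   69, 26.0),
    ("milk, 1 cup",                31, 12.0),
]

daily_gl = sum(gi * carb / 100 for _, gi, carb in foods)
daily_carb = sum(carb for _, _, carb in foods)
dietary_gi = sum(gi * carb for _, gi, carb in foods) / daily_carb  # carbohydrate-weighted mean GI

print(f"daily GL ≈ {daily_gl:.0f}, dietary GI ≈ {dietary_gi:.0f}")
```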
Abstract:
On-axis monochromatic higher-order aberrations increase with age. Few studies have been made of peripheral refraction along the horizontal meridian of older eyes, and none of their off-axis higher-order aberrations. We measured wave aberrations over the central 42° × 32° visual field for a 5 mm pupil in 10 young and 7 older emmetropes. Patterns of peripheral refraction were similar in the two groups. Coma increased linearly with field angle at a significantly higher rate in older than in young emmetropes (−0.018±0.007 versus −0.006±0.002 µm/deg). Spherical aberration was almost constant over the measured field in both age groups, and mean values across the field were significantly higher in older than in young emmetropes (+0.08±0.05 versus +0.02±0.04 µm). Total root-mean-square and higher-order aberrations increased more rapidly with field angle in the older emmetropes. However, the limits to monochromatic peripheral retinal image quality are largely determined by the second-order aberrations, which do not change markedly with age, and under normal conditions the relative importance of the increased higher-order aberrations in older eyes is lessened by the reduction in pupil diameter with age. Therefore, it is unlikely that peripheral visual performance deficits observed in normal older individuals are primarily attributable to the increased impact of higher-order aberrations.
Abstract:
The present paper motivates the study of mind change complexity for learning minimal models of length-bounded logic programs. It establishes ordinal mind change complexity bounds for learnability of these classes both from positive facts and from positive and negative facts. Building on Angluin’s notion of finite thickness and Wright’s work on finite elasticity, Shinohara defined the property of bounded finite thickness to give a sufficient condition for learnability of indexed families of computable languages from positive data. This paper shows that an effective version of Shinohara’s notion of bounded finite thickness gives sufficient conditions for learnability with an ordinal mind change bound, both in the context of learnability from positive data and for learnability from complete (both positive and negative) data. Let Ω be a notation for the first limit ordinal. Then, it is shown that if a language defining framework yields a uniformly decidable family of languages and has effective bounded finite thickness, then for each natural number m > 0, the class of languages defined by formal systems of length ≤ m:
• is identifiable in the limit from positive data with a mind change bound of Ω^m;
• is identifiable in the limit from both positive and negative data with an ordinal mind change bound of Ω × m.
The above sufficient conditions are employed to give an ordinal mind change bound for learnability of minimal models of various classes of length-bounded Prolog programs, including Shapiro’s linear programs, Arimura and Shinohara’s depth-bounded linearly covering programs, and Krishna Rao’s depth-bounded linearly moded programs. It is also noted that the bound for learning from positive data is tight for the example classes considered.
Abstract:
Impedance cardiography is an application of bioimpedance analysis primarily used in a research setting to determine cardiac output. It is a non-invasive technique that measures the change in the impedance of the thorax, which is attributed to the ejection of a volume of blood from the heart. The cardiac output is calculated from the measured impedance using the parallel conductor theory and a constant value for the resistivity of blood. However, the resistivity of blood has been shown to be velocity dependent due to changes in the orientation of red blood cells induced by changing shear forces during flow. The overall goal of this thesis was to study the effect that flow deviations have on the electrical impedance of blood, both experimentally and theoretically, and to apply the results to a clinical setting. The resistivity of stationary blood is isotropic, as the red blood cells are randomly orientated due to Brownian motion. In the case of blood flowing through rigid tubes, the resistivity is anisotropic due to the biconcave discoidal shape and orientation of the cells. The generation of shear forces across the width of the tube during flow causes the cells to align with the minimal cross-sectional area facing the direction of flow. This is in order to minimise the shear stress experienced by the cells. This in turn results in a larger cross-sectional area of plasma and a reduction in the resistivity of the blood as the flow increases. Understanding the contribution of this effect to the thoracic impedance change is a vital step in achieving clinical acceptance of impedance cardiography. Published literature investigates the resistivity variations for constant blood flow. In this case, the shear forces are constant and the impedance remains constant during flow at a magnitude which is less than that for stationary blood. The research presented in this thesis, however, investigates the variations in resistivity of blood during pulsatile flow through rigid tubes and the relationship between impedance, velocity and acceleration. Using rigid tubes isolates the impedance change to variations associated with changes in cell orientation only. The implications of red blood cell orientation changes for clinical impedance cardiography were also explored. This was achieved through measurement and analysis of the experimental impedance of pulsatile blood flowing through rigid tubes in a mock circulatory system. A novel theoretical model including cell orientation dynamics was developed for the impedance of pulsatile blood through rigid tubes. The impedance of flowing blood was theoretically calculated using analytical methods for flow through straight tubes and the numerical Lattice Boltzmann method for flow through complex geometries such as aortic valve stenosis. The result of the analytical theoretical model was compared to the experimental impedance measurements through rigid tubes. The impedance calculated for flow through a stenosis using the Lattice Boltzmann method provides results for comparison with impedance cardiography measurements collected as part of a pilot clinical trial to assess the suitability of using bioimpedance techniques to assess the presence of aortic stenosis. The experimental and theoretical impedance of blood was shown to inversely follow the blood velocity during pulsatile flow with a correlation of -0.72 and -0.74 respectively.
The results for both the experimental and theoretical investigations demonstrate that the acceleration of the blood is an important factor in determining the impedance, in addition to the velocity. During acceleration, the relationship between impedance and velocity is linear (r² = 0.98 experimental, r² = 0.94 theoretical). The relationship between the impedance and velocity during the deceleration phase is characterised by a time decay constant, τ, ranging from 10 to 50 s. The high level of agreement between the experimental and theoretically modelled impedance demonstrates the accuracy of the model developed here. An increase in the haematocrit of the blood resulted in an increase in the magnitude of the impedance change due to changes in the orientation of red blood cells. The time decay constant was shown to decrease linearly with the haematocrit for both experimental and theoretical results, although the slope of this decrease was larger in the experimental case. The radius of the tube influences the experimental and theoretical impedance given the same velocity of flow. However, when the velocity was divided by the radius of the tube (labelled the reduced average velocity), the impedance response was the same for two experimental tubes with equivalent reduced average velocity but with different radii. The temperature of the blood was also shown to affect the impedance, with the impedance decreasing as the temperature increased. These results are the first published for the impedance of pulsatile blood. The experimental impedance change measured orthogonal to the direction of flow is in the opposite direction to that measured in the direction of flow. These results indicate that the impedance of blood flowing through rigid cylindrical tubes is axisymmetric along the radius. This has not previously been verified experimentally. Time-frequency analysis of the experimental results demonstrated that the measured impedance contains the same frequency components, occurring at the same time point in the cycle, as the velocity signal. This suggests that the impedance contains many of the fluctuations of the velocity signal. Application of a theoretical steady flow model to pulsatile flow presented here has verified that the steady flow model is not adequate for calculating the impedance of pulsatile blood flow. The success of the new theoretical model over the steady flow model demonstrates that the velocity profile is important in determining the impedance of pulsatile blood. The clinical application of the impedance of blood flow through a stenosis was theoretically modelled using the Lattice Boltzmann method (LBM) for fluid flow through complex geometries. The impedance of blood exiting a narrow orifice was calculated for varying degrees of stenosis. Clinical impedance cardiography measurements were also recorded for both aortic valvular stenosis patients (n = 4) and control subjects (n = 4) with structurally normal hearts. This pilot trial was used to corroborate the results of the LBM. Results from both investigations showed that the decay time constant for impedance has potential in the assessment of aortic valve stenosis. In the theoretically modelled case (LBM results), the decay time constant increased with an increase in the degree of stenosis. The clinical results also showed a statistically significant difference in time decay constant between control and test subjects (P = 0.03).
The time decay constant calculated for test subjects (τ = 180–250 s) is consistently larger than that determined for control subjects (τ = 50–130 s). This difference is thought to be due to differences in the orientation response of the cells as blood flows through the stenosis. Such a non-invasive technique using the time decay constant for screening of aortic stenosis provides additional information to that currently given by impedance cardiography techniques and improves the value of the device to practitioners. However, the results still need to be verified in a larger study. While impedance cardiography has not been widely adopted clinically, it is research such as this that will enable future acceptance of the method.
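The characterisation of the deceleration phase by a time decay constant can be illustrated with a small curve-fitting sketch. The synthetic impedance trace, the parameter values and the use of scipy's curve_fit are illustrative assumptions, not the analysis pipeline used in the thesis.

```python
import numpy as np
from scipy.optimize import curve_fit

# Sketch: fit an exponential decay with time constant tau to an impedance
# segment from the deceleration phase. Synthetic data; values are illustrative.

def decay(t, dz, tau, z_inf):
    """Impedance relaxing towards its baseline z_inf with time constant tau."""
    return z_inf + dz * np.exp(-t / tau)

t = np.linspace(0.0, 120.0, 300)                         # s
z = decay(t, dz=-0.15, tau=30.0, z_inf=30.0)             # ohms, synthetic "measurement"
z += np.random.default_rng(1).normal(0.0, 0.005, t.size)  # measurement noise

(fit_dz, fit_tau, fit_zinf), _ = curve_fit(decay, t, z, p0=(-0.1, 20.0, 30.0))
print(f"fitted time decay constant tau = {fit_tau:.1f} s")
```

A larger fitted tau corresponds to a slower return of the impedance towards its baseline, which is the quantity reported above as differing between stenosis patients and controls.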
Abstract:
A special transmit polarization signalling scheme is presented to alleviate the power reduction resulting from polarization mismatch due to random antenna orientations. This is particularly useful for hand-held mobile terminals typically equipped with only a single linearly polarized antenna, since the average signal power is desensitized against receiver orientation. Numerical simulations also show adequate robustness against incorrect channel estimation.
Abstract:
A basic understanding of the relationships between rainfall intensity, duration of rainfall and the amount of suspended particles in stormwater runoff generated from road surfaces has been gained mainly from past washoff experiments using rainfall simulators. Simulated rainfall was generally applied at constant intensities, whereas rainfall temporal patterns during actual storms are typically highly variable. This paper discusses a rationale for the application of the constant-intensity washoff concepts to actual storm event runoff. The rationale is tested using suspended particle load data collected at a road site located in Toowoomba, Australia. Agreement between the washoff concepts and measured data is most consistent for intermediate-duration storms (duration <5 h and >1 h). Particle loads resulting from these storm events increase linearly with average rainfall intensity. Above a threshold intensity, there is evidence to suggest a constant or plateau particle load is reached. The inclusion of a peak discharge factor (maximum 6 min rainfall intensity) enhances the ability to predict particle loads.
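The linear-then-plateau relationship described for intermediate-duration storms can be sketched as a simple piecewise model. The slope, threshold intensity and peak-discharge adjustment below are placeholders for illustration, not the coefficients fitted to the Toowoomba data.

```python
# Sketch of the linear-then-plateau washoff relationship described above.
# Slope, threshold intensity and peak-discharge adjustment are illustrative
# placeholders, not coefficients fitted to the Toowoomba data set.

def particle_load(avg_intensity_mm_h, peak_6min_intensity_mm_h,
                  slope=0.8, threshold=25.0, peak_factor=0.02):
    """Suspended particle load (arbitrary units) for an intermediate-duration storm:
    linear in average rainfall intensity up to a threshold, then a plateau,
    with a multiplicative adjustment for the maximum 6-minute intensity."""
    base = slope * min(avg_intensity_mm_h, threshold)      # linear, then plateau
    return base * (1.0 + peak_factor * peak_6min_intensity_mm_h)

for intensity in (5, 15, 25, 40):
    print(intensity, "mm/h ->", round(particle_load(intensity, 30.0), 2))
```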
Abstract:
Sample complexity results from computational learning theory, when applied to neural network learning for pattern classification problems, suggest that for good generalization performance the number of training examples should grow at least linearly with the number of adjustable parameters in the network. Results in this paper show that if a large neural network is used for a pattern classification problem and the learning algorithm finds a network with small weights that has small squared error on the training patterns, then the generalization performance depends on the size of the weights rather than the number of weights. For example, consider a two-layer feedforward network of sigmoid units, in which the sum of the magnitudes of the weights associated with each unit is bounded by A and the input dimension is n. We show that the misclassification probability is no more than a certain error estimate (that is related to squared error on the training set) plus A³√((log n)/m) (ignoring log A and log m factors), where m is the number of training patterns. This may explain the generalization performance of neural networks, particularly when the number of training examples is considerably smaller than the number of weights. It also supports heuristics (such as weight decay and early stopping) that attempt to keep the weights small during training. The proof techniques appear to be useful for the analysis of other pattern classifiers: when the input domain is a totally bounded metric space, we use the same approach to give upper bounds on misclassification probability for classifiers with decision boundaries that are far from the training examples.
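In symbols, the quoted bound (a paraphrase of the sentence above, suppressing the log A and log m factors and absolute constants) reads:

```latex
% Size-of-weights generalisation bound quoted in the abstract, up to log A,
% log m factors and absolute constants. \widehat{\mathrm{err}}_m is the error
% estimate computed from the squared error on the m training patterns.
\[
  \Pr\bigl[\text{misclassification}\bigr]
  \;\le\;
  \widehat{\mathrm{err}}_m \;+\; O\!\left( A^{3} \sqrt{\frac{\log n}{m}} \right),
\]
% where each unit's weights have total magnitude at most A and the input
% dimension is n.
```

The key feature is that the complexity term depends on the weight-magnitude bound A and only logarithmically on the input dimension n, not on the raw number of weights.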
Abstract:
A scaling analysis is performed for the transient boundary layer established adjacent to an inclined flat plate following a ramp cooling boundary condition. The imposed wall temperature decreases linearly to a specified value over a specified time. It is revealed that if the ramp time is sufficiently long, the boundary layer reaches a quasi-steady mode before the imposed temperature change is finished. However, if the ramp time is shorter, the boundary layer may reach its steady state only after the temperature change is completed. In this case, the ultimate steady state is the same as if the start-up had been instantaneous. Note that the cold boundary layer adjacent to the plate is potentially unstable to Rayleigh-Bénard instability if the Rayleigh number exceeds a certain critical value for this cooling case. The instability may set in at different stages of the boundary layer development. A proper identification of the time when the instability may set in is discussed. A numerical verification of the time for the onset of instability is presented in this study. Different flow regimes based on the stability of the boundary layer have also been discussed with numerical results.
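For clarity, the ramp cooling condition described above can be written out explicitly; the symbols below are generic (t_p for the ramp time, ΔT for the total temperature drop) and are not necessarily the notation used in the paper.

```latex
% Ramp wall-temperature boundary condition described in the abstract
% (generic symbols; requires amsmath for the cases environment).
\[
  T_w(t) \;=\;
  \begin{cases}
    T_0 - \Delta T \,\dfrac{t}{t_p}, & 0 \le t \le t_p,\\[6pt]
    T_0 - \Delta T, & t > t_p,
  \end{cases}
\]
```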
Abstract:
Unsteady natural convection inside a triangular cavity subject to non-instantaneous heating on the inclined walls, in the form of an imposed temperature which increases linearly up to a prescribed steady value over a prescribed time, is reported. The development of the flow from start-up to steady state has been described based on scaling analyses and direct numerical simulations. The ramp temperature has been chosen in such a way that the boundary layer reaches a quasi-steady mode before the growth of the temperature is completed. In this mode the thermal boundary layer at first grows in thickness and then contracts with increasing time. However, if the imposed wall temperature growth period is sufficiently short, the boundary layer develops differently. Many houses have roofs of isosceles triangular cross-section, so the heat transfer process through such attic-shaped spaces should be well understood: in terms of building energy, one of the most important objectives in the design and construction of houses is to provide thermal comfort for occupants. Moreover, in the present energy-conscious society it is also a requirement for houses to be energy efficient, i.e. the energy consumption for heating or air-conditioning houses must be minimized.