119 results for Second Order Parabolic Heat Equation
Abstract:
We develop and test a theoretically based integrative model of organizational innovation adoption. Confirmatory factor analyses using responses from 134 organizations showed that the hypothesized second-order model fit the data better than the traditional model of independent factors. Furthermore, although not all elements were significant, the hypothesized model fit the adoption data better than the traditional model.
Abstract:
Query reformulation is a key user behavior during Web search. Our research goal is to develop predictive models of query reformulation during Web searching. This article reports results from a study in which we automatically classified the query-reformulation patterns for 964,780 Web searching sessions, composed of 1,523,072 queries, to predict the next query reformulation. We employed an n-gram modeling approach to describe the probability of users transitioning from one query-reformulation state to another to predict their next state. We developed first-, second-, third-, and fourth-order models and evaluated each model for accuracy of prediction, coverage of the dataset, and complexity of the possible pattern set. The results show that Reformulation and Assistance account for approximately 45% of all query reformulations; furthermore, the results demonstrate that the first- and second-order models provide the best predictability, between 28 and 40% overall and higher than 70% for some patterns. Implications are that the n-gram approach can be used for improving searching systems and searching assistance.
Abstract:
This paper reports results from a study in which we automatically classified the query reformulation patterns for 964,780 Web searching sessions (composed of 1,523,072 queries) in order to predict what the next query reformulation would be. We employed an n-gram modeling approach to describe the probability of searchers transitioning from one query reformulation state to another and to predict their next state. We developed first-, second-, third-, and fourth-order models and evaluated each model for accuracy of prediction. Findings show that Reformulation and Assistance account for approximately 45 percent of all query reformulations. Searchers seem to seek system searching assistance early in the session or after a content change. The results of our evaluations show that the first- and second-order models provided the best predictability, between 28 and 40 percent overall, and higher than 70 percent for some patterns. Implications are that the n-gram approach can be used for improving searching systems and searching assistance in real time.
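The two abstracts above describe predicting a searcher's next query-reformulation state from the preceding one or two states. As an illustration only (the study's own code and state taxonomy are not reproduced here), the following is a minimal sketch of a second-order n-gram transition model over hypothetical reformulation-state sequences:

```python
from collections import Counter, defaultdict

# Hypothetical state labels and training sessions; the study's actual
# taxonomy of reformulation states may differ.
sessions = [
    ["New", "Specialization", "Reformulation", "Assistance"],
    ["New", "Reformulation", "Reformulation", "Generalization"],
    ["New", "Specialization", "Reformulation", "Reformulation"],
]

# Count transitions conditioned on the two preceding states (second order).
counts = defaultdict(Counter)
for s in sessions:
    for a, b, nxt in zip(s, s[1:], s[2:]):
        counts[(a, b)][nxt] += 1

def predict_next(a, b):
    """Most likely next state given the two preceding states, with its
    estimated probability; (None, 0.0) if the context was never seen."""
    c = counts.get((a, b))
    if not c:
        return None, 0.0
    state, n = c.most_common(1)[0]
    return state, n / sum(c.values())

print(predict_next("Specialization", "Reformulation"))
```

Evaluating such models for accuracy, coverage (the fraction of contexts seen in training), and pattern-set complexity, as the study does, then reduces to counting how often `predict_next` is right and how often it returns no prediction.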
Abstract:
Suggestions that peripheral imagery may affect the development of refractive error have led to interest in the variation in refraction and aberration across the visual field. It is shown that, if the optical system of the eye is rotationally symmetric about an optical axis which does not coincide with the visual axis, measurements of refraction and aberration made along the horizontal and vertical meridians of the visual field will show asymmetry about the visual axis. The departures from symmetry are modelled for second-order aberrations, refractive components and third-order coma. These theoretical results are compared with practical measurements from the literature. The experimental data support the concept that departures from symmetry about the visual axis in the measurements of crossed-cylinder astigmatism J45 and J180 are largely explicable in terms of a decentred optical axis. Measurements of the mean sphere M suggest, however, that the retinal curvature must differ in the horizontal and vertical meridians.
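For reference, the quantities J45, J180, and M used above are the standard power-vector components of a sphero-cylindrical refraction S / C × α (a well-established convention in the refraction literature, not specific to this paper):

```latex
M = S + \frac{C}{2}, \qquad
J_{180} = -\frac{C}{2}\cos 2\alpha, \qquad
J_{45} = -\frac{C}{2}\sin 2\alpha
```

M is the mean spherical equivalent, while J180 and J45 are the crossed-cylinder components at axes 180°/90° and 45°/135°, respectively.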
Abstract:
A common optometric problem is to specify the eye's ocular aberrations in terms of Zernike coefficients and to reduce that specification to a prescription for the optimum sphero-cylindrical correcting lens. The typical approach is first to reconstruct wavefront phase errors from measurements of wavefront slopes obtained by a wavefront aberrometer. This paper applies a new method to this clinical problem that does not require wavefront reconstruction. Instead, we base our analysis on axial wavefront vergence as inferred directly from wavefront slopes. The result is a wavefront vergence map that is similar to the axial power maps in corneal topography and hence has the potential to be favoured by clinicians. We use our new set of orthogonal Zernike slope polynomials to systematically analyse details of the vergence map, analogous to Zernike analysis of wavefront maps. The result is a vector of slope coefficients that describe fundamental aberration components. Three different methods for reducing slope coefficients to a sphero-cylindrical prescription in power vector form are compared and contrasted. When the original wavefront contains only second-order aberrations, the vergence map is a function of meridian only and the power vectors from all three methods are identical. The differences between the methods begin to appear as higher-order aberrations are included, in which case the wavefront vergence map is more complicated. Finally, we discuss the advantages and limitations of the vergence map representation of ocular aberrations.
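For the second-order-only case the paper mentions, the conventional wavefront-based conversion (widely used in the wavefront literature; given here for orientation, not as the paper's slope-based method) maps the second-order Zernike coefficients over a pupil of radius r to the power vector:

```latex
M = \frac{-4\sqrt{3}\, c_2^{0}}{r^2}, \qquad
J_0 = \frac{-2\sqrt{6}\, c_2^{2}}{r^2}, \qquad
J_{45} = \frac{-2\sqrt{6}\, c_2^{-2}}{r^2}
```

(with coefficients in micrometres and r in millimetres, the results are in dioptres).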
Abstract:
On-axis monochromatic higher-order aberrations increase with age. Few studies have been made of peripheral refraction along the horizontal meridian of older eyes, and none of their off-axis higher-order aberrations. We measured wave aberrations over the central 42° × 32° visual field for a 5 mm pupil in 10 young and 7 older emmetropes. Patterns of peripheral refraction were similar in the two groups. Coma increased linearly with field angle at a significantly higher rate in older than in young emmetropes (−0.018 ± 0.007 versus −0.006 ± 0.002 µm/deg). Spherical aberration was almost constant over the measured field in both age groups, and mean values across the field were significantly higher in older than in young emmetropes (+0.08 ± 0.05 versus +0.02 ± 0.04 µm). Total root-mean-square and higher-order aberrations increased more rapidly with field angle in the older emmetropes. However, the limits to monochromatic peripheral retinal image quality are largely determined by the second-order aberrations, which do not change markedly with age, and under normal conditions the relative importance of the increased higher-order aberrations in older eyes is lessened by the reduction in pupil diameter with age. It is therefore unlikely that the peripheral visual performance deficits observed in normal older individuals are primarily attributable to the increased impact of higher-order aberration.
Abstract:
The performance of an adaptive filter may be studied through the behaviour of the optimal and adaptive coefficients in a given environment. This thesis investigates the performance of finite impulse response adaptive lattice filters for two classes of input signals: (a) frequency modulated signals with polynomial phases of order p in complex Gaussian white noise (as nonstationary signals), and (b) impulsive autoregressive processes with alpha-stable distributions (as non-Gaussian signals). Initially, an overview is given of linear prediction and adaptive filtering. The convergence and tracking properties of the stochastic gradient algorithms are discussed for stationary and nonstationary input signals. It is explained that the stochastic gradient lattice algorithm has many advantages over the least-mean square algorithm, among them a modular structure, easily guaranteed stability, lower sensitivity to the eigenvalue spread of the input autocorrelation matrix, and easy quantization of the filter coefficients (normally called reflection coefficients). We then characterize the performance of the stochastic gradient lattice algorithm for frequency modulated signals through the optimal and adaptive lattice reflection coefficients. This is a difficult task due to the nonlinear dependence of the adaptive reflection coefficients on the preceding stages and the input signal. To ease the derivations, we assume that the reflection coefficients of each stage are independent of the inputs to that stage. The optimal lattice filter is then derived for frequency modulated signals by computing the optimal values of the residual errors, reflection coefficients, and recovery errors. Next, we show the tracking behaviour of the adaptive reflection coefficients for frequency modulated signals by computing the average tracking model of these coefficients under the stochastic gradient lattice algorithm. The second-order convergence of the adaptive coefficients is investigated by modelling the theoretical asymptotic variance of the gradient noise at each stage. The accuracy of the analytical results is verified by computer simulations. Using these analytical results, we establish a new property of adaptive lattice filters: the polynomial-order-reducing property, which may be used to reduce the order of the polynomial phase of input frequency modulated signals. Two examples show how this property may be used in processing frequency modulated signals. In the first example, a detection procedure is carried out on a frequency modulated signal with a second-order polynomial phase in complex Gaussian white noise; using this technique, a better probability of detection is obtained for the reduced-order phase signals than with the traditional energy detector. It is also empirically shown that the distribution of the gradient noise in the first adaptive reflection coefficients approximates the Gaussian law. In the second example, the instantaneous frequency of the same observed signal is estimated; using this technique, a lower mean square error is achieved for the estimated frequencies at high signal-to-noise ratios than with the adaptive line enhancer. The performance of adaptive lattice filters is then investigated for the second type of input signals, i.e., impulsive autoregressive processes with alpha-stable distributions.
The concept of alpha-stable distributions is first introduced. We discuss why the stochastic gradient algorithm, which performs well for finite-variance input signals (such as frequency modulated signals in noise), does not converge quickly for infinite-variance stable processes (because it uses the minimum mean-square error criterion). To deal with such problems, the minimum dispersion criterion, fractional lower order moments, and recently developed algorithms for stable processes are introduced. We then study the possibility of using the lattice structure for impulsive stable processes. Accordingly, two new algorithms, the least-mean P-norm lattice algorithm and its normalized version, are proposed for lattice filters based on fractional lower order moments. Simulation results show that the proposed algorithms achieve faster convergence in parameter estimation for autoregressive stable processes with low to moderate degrees of impulsiveness than many other algorithms. We also discuss the effect of the impulsiveness of stable processes on the misalignment between the estimated parameters and the true values. Due to the infinite variance of stable processes, the performance of the proposed algorithms is investigated only through extensive computer simulations.
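As background for the lattice structure discussed in this thesis abstract, here is a minimal sketch of a single-stage gradient adaptive lattice update in its textbook form; the step size, test signal, and all names are illustrative, and the thesis's own algorithms (including the least-mean P-norm variants) are not reproduced:

```python
import numpy as np

def gradient_lattice_stage(x, mu=0.001):
    """Adapt the reflection coefficient k of a single lattice stage by
    stochastic gradient descent on the forward/backward error powers."""
    k = 0.0
    k_hist = np.zeros(len(x))
    b_prev = 0.0                 # delayed backward error b0(n-1)
    for i, sample in enumerate(x):
        f0 = b0 = sample         # stage-0 errors equal the input
        f1 = f0 - k * b_prev     # forward prediction error
        b1 = b_prev - k * f0     # backward prediction error
        k += mu * (f1 * b_prev + b1 * f0)  # gradient update of k
        b_prev = b0
        k_hist[i] = k
    return k_hist

# Example: for the AR(1) process x[n] = 0.8 x[n-1] + w[n], the optimal
# first reflection coefficient equals the lag-1 correlation, about 0.8.
rng = np.random.default_rng(0)
w = rng.standard_normal(5000)
x = np.zeros_like(w)
for i in range(1, len(x)):
    x[i] = 0.8 * x[i - 1] + w[i]
print(gradient_lattice_stage(x)[-1])  # should settle near 0.8
```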
Abstract:
While it is commonly accepted that computability on a Turing machine in polynomial time represents a correct formalization of the notion of a feasibly computable function, there is no similar agreement on how to extend this notion to functionals, that is, on which functionals should be considered feasible. One possible paradigm was introduced by Mehlhorn, who extended Cobham's definition of feasible functions to type-2 functionals. Subsequently, this class of functionals (with inessential changes of definition) was studied by Townsend, who calls it POLY, and by Kapron and Cook, who call the same class the basic feasible functionals. Kapron and Cook gave an oracle Turing machine model characterisation of this class. In this article, we demonstrate that the class of basic feasible functionals has recursion-theoretic properties which naturally generalise the corresponding properties of the class of feasible functions, giving further evidence that the notion of feasibility of functionals mentioned above is correctly chosen. We also improve the Kapron and Cook result on machine representation. Our proofs are based on essential applications of logic. We introduce a weak fragment of second-order arithmetic with second-order variables ranging over functions from N to N which suitably characterises the basic feasible functionals, and show that it is a useful tool for investigating their properties. In particular, we provide an example of how one can extract feasible programs from mathematical proofs that use nonfeasible functions.
Abstract:
The topic of the present work is the relationship between the power of learning algorithms on the one hand, and the expressive power of the logical language used to represent the problems to be learned on the other. The central question is whether enriching the language results in more learning power. In order to make the question relevant and nontrivial, it is required that both texts (sequences of data) and hypotheses (guesses) be translatable from the "rich" language into the "poor" one. The issue is considered for several logical languages suitable for describing structures whose domain is the set of natural numbers. It is shown that enriching the language gives no advantage for those languages which define a monadic second-order language that is decidable in the following sense: there is a fixed interpretation in the structure of the natural numbers such that the set of sentences of the extended language true in that structure is decidable. But enriching the original language even by a single constant gives an advantage if the language contains a binary function symbol (which will be interpreted as addition). Furthermore, it is shown that behaviourally correct learning has exactly the same power as learning in the limit for languages which define a monadic second-order language with the property given above, but has more power for languages containing a binary function symbol. Adding the natural requirement that the set of all structures to be learned be recursively enumerable, it is shown that it pays off to enrich the language of arithmetic for both finite learning and learning in the limit, but it does not pay off to enrich the language for behaviourally correct learning.
Abstract:
Since its initial proposal in 1998, alkaline hydrothermal processing has rapidly become an established technology for the production of titanate nanostructures. This simple, highly reproducible process has gained a strong research following since its conception. However, a complete understanding and elucidation of nanostructure phase and formation have not yet been achieved. Without fully understanding phase, formation, and the other important competing effects of the synthesis parameters on the final structure, the maximum potential of these nanostructures cannot be obtained. This study therefore examined the influence of synthesis parameters on the formation of titanate nanostructures produced by alkaline hydrothermal treatment. The parameters included alkaline concentration, hydrothermal temperature, the precursor material's crystallite size, and the phase of the titanium dioxide precursor (TiO2, or titania). Nanostructure phase and morphology were analysed using X-ray diffraction (XRD), Raman spectroscopy, and transmission electron microscopy. X-ray photoelectron spectroscopy (XPS), dynamic light scattering (non-invasive backscattering), nitrogen sorption, and Rietveld analysis were used for phase determination, particle sizing, surface area determination, and establishing phase concentrations, respectively. The project rigorously examined the effect of alkaline concentration and hydrothermal temperature on three commercially sourced and two self-prepared TiO2 powders. These precursors consisted of pure- or mixed-phase anatase and rutile polymorphs, selected to cover a range of phase concentrations and crystallite sizes. Typically, the precursors were treated with 5–10 M sodium hydroxide (NaOH) solutions at temperatures between 100 and 220 °C. Both nanotube and nanoribbon morphologies could be produced, depending on the combination of these hydrothermal conditions. Both titania and titanate phases are built from TiO6 units assembled in different combinations, and the arrangement of these atoms affects the binding energy of the Ti–O bonds. Raman spectroscopy and XPS were therefore employed in a preliminary study of phase determination for these materials. The change from a titania to a titanate binding energy was investigated, and the transformation of the titania precursor into nanotubes and titanate nanoribbons was directly observed by these methods. Evaluation of the Raman and XPS results indicated a strengthening in the binding energies of both the Ti (2p3/2) and O (1s) bands, which correlated with an increase in strength and decrease in resolution of the characteristic nanotube doublet observed between 320 and 220 cm⁻¹ in the Raman spectra of these products. The effect of phase and crystallite size on nanotube formation was examined over a series of temperatures (100–200 °C in 20 °C increments) at a set alkaline concentration (7.5 M NaOH), employing both pure- and mixed-phase precursors of anatase and rutile. This study indicated that both crystallite size and phase affect nanotube formation: rutile requires a greater driving force (essentially harsher hydrothermal conditions) than anatase to form nanotubes, and larger precursor crystallites also appeared to impede nanotube formation slightly. These parameters were examined further in later studies.
The influence of alkaline concentration and hydrothermal temperature was systematically examined for the transformation of Degussa P25 into nanotubes and nanoribbons, and exact conditions for nanostructure synthesis were determined. Correlation of these data sets resulted in the construction of a morphological phase diagram, an effective reference for nanostructure formation that in effect provides a "recipe book" for the formation of titanate nanostructures. Morphological phase diagrams were also constructed for larger, near phase-pure anatase and rutile precursors, to further investigate the influence of the hydrothermal reaction parameters on the formation of titanate nanotubes and nanoribbons. The effects of alkaline concentration, hydrothermal temperature, and crystallite phase and size can be observed when the three morphological phase diagrams are compared. Analysis of these results showed that alkaline concentration and hydrothermal temperature affect nanotube and nanoribbon formation independently through a complex relationship: nanotubes are primarily affected by temperature, whilst nanoribbons are strongly influenced by alkaline concentration. Crystallite size and phase also affected nanostructure formation. Smaller precursor crystallites formed nanostructures at reduced hydrothermal temperatures, and rutile displayed a slower rate of precursor consumption than anatase, with incomplete conversion observed under most hydrothermal conditions. The incomplete conversion of rutile into nanotubes was examined in detail in the final study, which selectively examined the kinetics of precursor dissolution in order to understand why rutile converted incompletely. This was achieved by selecting a single hydrothermal condition (9 M NaOH, 160 °C) under which nanotubes are known to form from both anatase and rutile, and quenching the synthesis after 2, 4, 8, 16, and 32 hours. The influence of precursor phase on nanostructure formation was explicitly determined to be due to different dissolution kinetics: anatase exhibited zero-order dissolution and rutile second-order. This difference in kinetic order cannot be explained simply by the variation in crystallite size, as the inherent surface areas of the two precursors were determined to have first-order relationships with time. The crystallite size (and inherent surface area) therefore does not affect the overall kinetic order of dissolution; rather, it determines the rate of reaction. Finally, nanostructure formation was found to be controlled by the availability of dissolved titanium (Ti4+) species in solution, which is mediated by the dissolution kinetics of the precursor.
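For reference, the zero- and second-order dissolution kinetics identified above correspond to the standard integrated rate laws (textbook forms, with C the remaining precursor concentration, C₀ its initial value, and k the rate constant; the thesis's fitted parameters are not reproduced here):

```latex
\text{zero order:}\quad \frac{dC}{dt} = -k \;\Rightarrow\; C(t) = C_0 - kt
\qquad
\text{second order:}\quad \frac{dC}{dt} = -kC^{2} \;\Rightarrow\; \frac{1}{C(t)} = \frac{1}{C_0} + kt
```

A zero-order process thus consumes the precursor at a constant rate regardless of how much remains, whereas a second-order process slows sharply as the precursor is depleted, consistent with the incomplete conversion observed for rutile.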
Abstract:
In contrast to conventional methods for structural reliability evaluation, such as first- and second-order reliability methods (FORM/SORM) or Monte Carlo simulation based on corresponding limit state functions, this paper proposes a novel approach based on a dynamic object-oriented Bayesian network (DOOBN) for predicting the structural reliability of a steel bridge element. The DOOBN approach can effectively model the deterioration processes of a steel bridge element and predict its structural reliability over time. The approach is also able to perform Bayesian updating with observed information from measurements, monitoring, and visual inspection. Moreover, the computational capacity embedded in the approach can be used to facilitate integrated management and maintenance optimization in a bridge system. A steel bridge girder is used to validate the proposed approach, and the predicted results are compared with those evaluated by the FORM method.
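To illustrate the conventional baselines the paper compares against (not the DOOBN method itself), here is a minimal sketch of a first-order reliability estimate versus crude Monte Carlo for a hypothetical linear limit state g = R - S with independent normal resistance R and load S; all parameter values are made up for the example:

```python
import numpy as np
from scipy.stats import norm

# Hypothetical parameters: resistance R ~ N(mu_R, sd_R), load S ~ N(mu_S, sd_S).
mu_R, sd_R = 10.0, 1.5
mu_S, sd_S = 6.0, 1.0

# FORM: for a linear limit state with normal variables the reliability
# index beta is exact, and the failure probability is Phi(-beta).
beta = (mu_R - mu_S) / np.hypot(sd_R, sd_S)
pf_form = norm.cdf(-beta)

# Crude Monte Carlo estimate of the same failure probability P(g < 0).
rng = np.random.default_rng(1)
n = 1_000_000
g = rng.normal(mu_R, sd_R, n) - rng.normal(mu_S, sd_S, n)
pf_mc = np.mean(g < 0)

print(f"beta = {beta:.3f}, Pf(FORM) = {pf_form:.2e}, Pf(MC) = {pf_mc:.2e}")
```

For nonlinear limit states or non-normal variables, FORM linearizes at the design point and becomes approximate, which is where SORM corrections and simulation-based checks such as the above come in.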
Abstract:
This thesis conceptualises Use for IS (Information Systems) success. While Use in this study describes the extent to which an IS is incorporated into the user's processes or tasks, the success of an IS is the measure of the degree to which the person using the system is better off. For IS success, the conceptualisation of Use offers new perspectives on describing and measuring Use. We test the premises of the conceptualisation using empirical evidence in an Enterprise Systems (ES) context. Results from the empirical analysis contribute insights to the existing body of knowledge on the role of Use and demonstrate that Use is an important factor and measure of IS success. System Use is a central theme in IS research; for instance, Use is regarded as an important dimension of IS success. Despite this recognition, the Use dimension of IS success reportedly suffers from an all too simplistic definition, misconception, poor specification of its complex nature, and inadequate measurement approaches (Bokhari 2005; DeLone and McLean 2003; Zigurs 1993). Given the above, Burton-Jones and Straub (2006) urge scholars to revisit the concept of system Use, consider a stronger theoretical treatment, and submit the construct to further validation in its intended nomological net. On those considerations, this study re-conceptualises Use for IS success. The new conceptualisation adopts a work-process, system-centric lens and draws upon the characteristics of modern system types, key user groups and their information needs, and the incorporation of IS in work processes. From these characteristics, the definition of Use and how it may be measured is systematically established. Use is conceptualised as a second-order measurement construct determined by three sub-dimensions: the attitude of its users, and the depth and amount of Use. The construct is positioned in a modified IS success research model in an attempt to demonstrate its central role in determining IS success in an ES setting. A two-stage, mixed-methods research design incorporating a sequential explanatory strategy was adopted to collect empirical data and to test the research model. The first empirical investigation involved an experiment and a survey of ES end users at a leading tertiary education institute in Australia. The second, a qualitative investigation, involved a series of interviews with real-world operational managers in large Indian private-sector companies to canvass their day-to-day experiences with ES. The research strategy adopted has a stronger quantitative leaning. The survey analysis results demonstrate the aptness of Use as an antecedent and a consequence of IS success and, furthermore, as a mediator between the quality of IS and the impacts of IS on individuals. Qualitative data analysis, on the other hand, is used to derive a framework for classifying the diversity of ES Use behaviour. The qualitative results establish that workers Use IS in their context to orientate, negotiate, or innovate. The implications are twofold. For research, this study contributes to cumulative IS success knowledge an approach for defining, contextualising, measuring, and validating Use. For practice, the research findings not only provide insights for educators incorporating ES in higher education, but also demonstrate how operational managers incorporate ES into their work practices.
Research findings leave the way open for future, larger-scale research into how industry practitioners interact with an ES to complete their work in varied organisational environments.
Abstract:
The aim of this study was to apply the principles of content, criterion, and construct validation to a new questionnaire specifically designed to measure foot-health status. One hundred eleven subjects completed two different questionnaires designed to measure foot health (the new Foot Health Status Questionnaire and the previously validated Foot Function Index) and underwent a clinical examination in order to provide data for a second-order confirmatory factor analysis. Presented herein is a psychometrically evaluated questionnaire that contains 13 items covering foot pain, foot function, footwear, and general foot health. The tool demonstrates a high degree of content, criterion, and construct validity and test-retest reliability.
Abstract:
This study was conducted within the IS-Impact Research Track at Queensland University of Technology (QUT). The goal of the IS-Impact Track is "to develop the most widely employed model for benchmarking information systems in organizations for the joint benefit of both research and practice" (Gable et al., 2006). IS-Impact is defined as "a measure at a point in time, of the stream of net benefits from the IS, to date and anticipated, as perceived by all key-user-groups" (Gable, Sedera and Chan, 2008). Track efforts have yielded the bicameral IS-Impact measurement model: the "impact" half includes the Organizational-Impact and Individual-Impact dimensions, and the "quality" half includes the System-Quality and Information-Quality dimensions. The IS-Impact model, by design, is intended to be robust, simple, and generalizable, yielding results that are comparable across time, stakeholders, and different systems and system contexts. The model and measurement approach employ perceptual measures and an instrument that is relevant to key stakeholder groups, thereby enabling the combination or comparison of stakeholder perspectives. Such a validated and widely accepted IS-Impact measurement model has both academic and practical value. It facilitates systematic operationalization of a main dependent variable in research (IS-Impact), which can also serve as an important independent variable. For IS management practice, it provides a means to benchmark and track the performance of information systems in use. The objective of this study is to develop a Mandarin-version IS-Impact model, encompassing a list of China-specific IS-Impact measures, to aid a better understanding of the IS-Impact phenomenon in a Chinese organizational context. The IS-Impact model provides much-needed theoretical guidance for this investigation of ES and ES impacts in a Chinese context. The appropriateness and soundness of employing the IS-Impact model as a theoretical foundation are evident: the model originated from a sound theory of IS success (1992), was developed through rigorous validation, and was derived in the context of Enterprise Systems. Based on the IS-Impact model, this study investigates a number of research questions (RQs). First, the research investigates what essential impacts Chinese users and organizations have derived from ES [RQ1]. Second, it investigates which salient quality features of ES are perceived by Chinese users [RQ2]. Third, it asks whether the quality and impact measures are sufficient to assess ES success in general [RQ3]. Last, it addresses whether the IS-Impact measurement model is appropriate for Chinese organizations evaluating their ES [RQ4]. An open-ended, qualitative identification survey was employed. A large body of short text data was gathered from 144 Chinese users, and 633 valid IS-Impact statements were generated from the data set. A generally inductive approach was applied in the qualitative data analysis. Rigorous qualitative data coding resulted in 50 first-order categories and 6 second-order categories grounded in the context of Chinese organizations. The six second-order categories are: 1) System Quality; 2) Information Quality; 3) Individual Impacts; 4) Organizational Impacts; 5) User Quality; and 6) IS Support Quality.
The final research finding of the study is a contextualized Mandarin-version IS-Impact measurement model that includes 38 measures organized into 4 dimensions: System Quality, Information Quality, Individual Impacts, and Organizational Impacts. The study also proposes two conceptual models to harmonize the IS-Impact model with the two emergent constructs, User Quality and IS Support Quality, drawing on previous IS effectiveness literature and on the Work System theory proposed by Alter (1999), respectively. The study is significant as the first effort to empirically and comprehensively investigate IS-Impact in China. Its contributions can be classified as theoretical and practical. From the theoretical perspective, through qualitative evidence, the study tests and consolidates the IS-Impact measurement model in terms of robustness, completeness, and generalizability. The unconventional research design is a further contribution: rather than treating the theoretical model as a top-down a priori model in search of confirming evidence, the study allows a competing model to emerge from bottom-up, open-coding analysis. The study is also an example of extending and localizing a pre-existing theory developed in a Western context when that theory is introduced to a different context. From the practical perspective, this is the first time prominent research findings in the field of IS success have been introduced to Chinese academia and practitioners. The study provides a guideline for Chinese organizations to assess their Enterprise Systems and to leverage IT investment in the future. As a research effort within the ITPS track, the study also provides the research team with an alternative operationalization of the dependent variable. Future research can take the contextualized Mandarin-version IS-Impact framework as an a priori theoretical model and further test its validity quantitatively and empirically.