235 results for Third-order model


Relevance:

80.00%

Publisher:

Abstract:

Animal models of refractive error development have demonstrated that visual experience influences ocular growth. In a variety of species, axial anisometropia (i.e. a difference in the length of the two eyes) can be induced through unilateral occlusion, image degradation or optical manipulation. In humans, anisometropia may occur in isolation or in association with amblyopia, strabismus or unilateral pathology. Non-amblyopic myopic anisometropia represents an interesting anomaly of ocular growth, since the two eyes within one visual system have grown to different endpoints. These experiments investigated a range of biometric, optical and mechanical properties of anisometropic eyes (with and without amblyopia) with the aim of improving our current understanding of asymmetric refractive error development. In the first experiment, the interocular symmetry in 34 non-amblyopic myopic anisometropes (31 Asian, 3 Caucasian) was examined during relaxed accommodation. A high degree of symmetry was observed between the fellow eyes for a range of optical, biometric and biomechanical measurements. When the magnitude of anisometropia exceeded 1.75 D, the more myopic eye was almost always the sighting dominant eye. Further analysis of the optical and biometric properties of the dominant and non-dominant eyes was conducted to determine any related factors, but no significant interocular differences were observed with respect to best-corrected visual acuity, corneal or total ocular aberrations during relaxed accommodation. Given the high degree of symmetry observed between the fellow eyes during distance viewing in the first experiment, and the strong association previously reported between near work and myopia development, the aim of the second experiment was to investigate the symmetry between the fellow eyes of the same 34 myopic anisometropes following a period of near work. Symmetrical changes in corneal and total ocular aberrations were observed following a short reading task (10 minutes, 2.5 D accommodation demand), which was attributed to the high degree of interocular symmetry in anterior eye morphology and corneal biomechanics. These changes were related to eyelid shape and position during downward gaze, but gave no clear indication of factors associated with near work that might cause asymmetric eye growth within an individual. Since the influence of near work on eye growth is likely to be most obvious during, rather than following, near tasks, in the third experiment the interocular symmetry of the optical and biometric changes was examined during accommodation for 11 myopic anisometropes. The changes in anterior eye biometrics associated with accommodation were again similar between the eyes, resulting in symmetrical changes in the optical characteristics. However, the more myopic eyes exhibited slightly greater amounts of axial elongation during accommodation, which may be related to the force exerted by the ciliary muscle. This small asymmetry in axial elongation between the eyes may be due to interocular differences in posterior eye structure, given that the accommodative response was equal between eyes. Using optical coherence tomography, a reduced average choroidal thickness was observed in the more myopic eyes compared to the less myopic eyes of these subjects. The interocular difference in choroidal thickness was correlated with the magnitude of spherical equivalent and axial anisometropia.
The symmetry in optics and biometrics between fellow eyes which have undergone significantly different visual development (i.e. anisometropic subjects with amblyopia) is also of interest with respect to refractive error development. In the final experiment, the influence of altered visual experience upon corneal and ocular higher-order aberrations was investigated in 21 amblyopic subjects (8 refractive, 11 strabismic and 2 form deprivation). Significant differences in aberrations were observed between the fellow eyes, which varied according to the type of amblyopia. Refractive amblyopes displayed significantly higher levels of fourth-order corneal aberrations (spherical aberration and secondary astigmatism) in the amblyopic eye compared to the fellow non-amblyopic eye. Strabismic amblyopes exhibited significantly higher levels of trefoil, a third-order aberration, in the amblyopic eye for both corneal and total ocular aberrations. The results of this experiment suggest that asymmetric visual experience during development is associated with asymmetries in higher-order aberrations, proportional to the magnitude of anisometropia and dependent upon the amblyogenic factor. This suggests a direct link between the development of the higher-order optical characteristics of the human eye and visual feedback. The results from these experiments have shown that a high degree of symmetry exists between the fellow eyes of non-amblyopic myopic anisometropes for a range of biomechanical, biometric and optical parameters, at different levels of accommodation and following near work. While no single optical or biomechanical factor consistently associated with asymmetric refractive error development has been identified, the findings from these studies suggest that further research into the association of ocular dominance, choroidal thickness and higher-order aberrations with anisometropia may improve our understanding of refractive error development.

Relevance:

80.00%

Publisher:

Abstract:

With the growing number of XML documents on the Web it becomes essential to effectively organise these XML documents in order to retrieve useful information from them. A possible solution is to apply clustering on the XML documents to discover knowledge that promotes effective data management, information retrieval and query processing. However, many issues arise in discovering knowledge from these types of semi-structured documents due to their heterogeneity and structural irregularity. Most of the existing research on clustering techniques focuses only on one feature of the XML documents, this being either their structure or their content, due to scalability and complexity problems. The knowledge gained in the form of clusters based on the structure or the content alone is not suitable for real-life datasets. It therefore becomes essential to include both the structure and content of XML documents in order to improve the accuracy and meaning of the clustering solution. However, the inclusion of both these kinds of information in the clustering process results in a huge overhead for the underlying clustering algorithm because of the high dimensionality of the data. The overall objective of this thesis is to address these issues by: (1) proposing methods to utilise frequent pattern mining techniques to reduce the dimension; (2) developing models to effectively combine the structure and content of XML documents; and (3) utilising the proposed models in clustering. This research first determines the structural similarity in the form of frequent subtrees and then uses these frequent subtrees to represent the constrained content of the XML documents in order to determine the content similarity. A clustering framework with two types of models, implicit and explicit, is developed. The implicit model uses a Vector Space Model (VSM) to combine the structure and the content information. The explicit model uses a higher-order model, namely a 3-order Tensor Space Model (TSM), to explicitly combine the structure and the content information. This thesis also proposes a novel incremental technique to decompose large-sized tensor models and to utilise the decomposed solution for clustering the XML documents. The proposed framework and its components were extensively evaluated on several real-life datasets exhibiting extreme characteristics to understand the usefulness of the proposed framework in real-life situations. Additionally, this research evaluates the outcome of the clustering process on the collection selection problem in information retrieval on the Wikipedia dataset. The experimental results demonstrate that the proposed frequent pattern mining and clustering methods outperform the related state-of-the-art approaches. In particular, the proposed framework of utilising frequent structures for constraining the content shows an improvement in accuracy over content-only and structure-only clustering results. The scalability evaluation experiments conducted on large-scale datasets clearly show the strengths of the proposed methods over state-of-the-art methods. In particular, this thesis work contributes to effectively combining the structure and the content of XML documents for clustering, in order to improve the accuracy of the clustering solution. In addition, it also contributes by addressing the research gaps in frequent pattern mining to generate efficient and concise frequent subtrees with various node relationships that could be used in clustering.
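As an informal illustration of the explicit model described above, the Python sketch below builds a toy 3-order tensor indexed by (document, frequent subtree, term), unfolds it along the document mode, applies a truncated SVD as a simple stand-in for the incremental tensor decomposition, and clusters the resulting document embeddings. The data, dimensions and the SVD/k-means choices are assumptions made for illustration, not the thesis implementation.

```python
# Minimal sketch: documents x frequent-subtrees x terms tensor, unfold, reduce, cluster.
import numpy as np
from numpy.linalg import svd

rng = np.random.default_rng(0)

n_docs, n_subtrees, n_terms = 12, 5, 8          # toy sizes (hypothetical data)
tensor = rng.poisson(1.0, size=(n_docs, n_subtrees, n_terms)).astype(float)
# tensor[d, s, t] = frequency of term t occurring within frequent subtree s of document d

# Mode-1 unfolding: one row per document, columns are (subtree, term) pairs.
unfolded = tensor.reshape(n_docs, n_subtrees * n_terms)

# Truncated SVD as a simple stand-in for a tensor decomposition step.
U, S, Vt = svd(unfolded, full_matrices=False)
k = 3
doc_embedding = U[:, :k] * S[:k]                # low-dimensional document representation

def kmeans(X, n_clusters, iters=20, seed=0):
    """Plain k-means on the document embeddings (few iterations, toy setting)."""
    rng = np.random.default_rng(seed)
    centers = X[rng.choice(len(X), n_clusters, replace=False)]
    for _ in range(iters):
        labels = np.argmin(((X[:, None] - centers[None]) ** 2).sum(-1), axis=1)
        for c in range(n_clusters):
            if np.any(labels == c):
                centers[c] = X[labels == c].mean(axis=0)
    return labels

print(kmeans(doc_embedding, n_clusters=3))
```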

Relevance:

80.00%

Publisher:

Abstract:

Magnetic zeolite NaA with different Fe3O4 loadings was prepared by hydrothermal synthesis based on metakaolin and Fe3O4. The effect of added Fe3O4 on the removal of ammonium by zeolite NaA was investigated by varying the Fe3O4 loading, pH, adsorption temperature, initial concentration, and adsorption time. The Langmuir, Freundlich, and pseudo-second-order models were used to describe the nature and mechanism of ammonium ion exchange using both the zeolite and the magnetic zeolite. Thermodynamic parameters such as the changes in Gibbs free energy, enthalpy and entropy were calculated. The results show that all the selected factors affect the ammonium ion exchange by the zeolite and the magnetic zeolite; however, the added Fe3O4 apparently does not affect the ammonium ion exchange performance of the zeolite. The Freundlich model provides a better description of the adsorption process than the Langmuir model. Moreover, kinetic analysis indicates that the exchange of ammonium on the two materials follows a pseudo-second-order model. Thermodynamic analysis makes it clear that the adsorption of ammonium is spontaneous and exothermic. Both the kinetic and the thermodynamic analyses suggest that the addition of Fe3O4 has no considerable effect on the adsorption of the ammonium ion by the zeolite. According to these results, magnetic zeolite NaA can be used for the removal of ammonium owing to its good adsorption performance and easy separation from aqueous solution.
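The kinetic fit mentioned above can be sketched as follows. The snippet fits the linearised pseudo-second-order model, t/q_t = 1/(k2*qe^2) + t/qe, to hypothetical ammonium-uptake data (the values below are made up, not the paper's measurements) and recovers the equilibrium capacity qe and rate constant k2.

```python
# Pseudo-second-order kinetics fitted via the linearised form t/q = intercept + slope*t.
import numpy as np

t = np.array([5, 10, 20, 40, 60, 90, 120], dtype=float)    # contact time, min (hypothetical)
q = np.array([8.1, 12.4, 16.9, 20.3, 21.8, 22.6, 22.9])     # uptake, mg/g (hypothetical)

slope, intercept = np.polyfit(t, t / q, 1)    # slope = 1/qe, intercept = 1/(k2*qe**2)
qe = 1.0 / slope                              # equilibrium capacity, mg/g
k2 = slope**2 / intercept                     # rate constant, g/(mg*min)

q_pred = (k2 * qe**2 * t) / (1 + k2 * qe * t)  # back-predicted uptake from the fitted model
r2 = 1 - np.sum((q - q_pred)**2) / np.sum((q - q.mean())**2)
print(f"qe = {qe:.2f} mg/g, k2 = {k2:.4f} g/(mg*min), R^2 = {r2:.4f}")
```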

Relevance:

80.00%

Publisher:

Abstract:

This research project frames an emerging field – fashion curation – through a theoretical, historical, and practical enquiry. Recent decades have seen fashion curation grow rapidly as a form of praxis and an area of academic attention, predominantly in museums and universities. Within this context, two major models for conceptualising the role of the fashion curator have emerged: the institutional and the independent curator. This project proposes and applies a third model: the adjunct fashion curator. In developing this model my project seeks to move the growing dialogue around fashion curation away from exclusively focusing on the museum. By proposing a third curatorial model for fashion, this research draws on the past of fashion display and exhibition for its context, while simultaneously exploring the adjunct model through my curatorial practice. The impact of sites of display, the role of gender, and the relationship between art and fashion are explored as pivotal themes in the development of fashion curation and thus provide contextual grounding for the proposal of the adjunct curatorial model. Alongside a theoretical and historical account of fashion curation, I conduct a practice-led inquiry that explores these themes through five exhibition projects and one photographic series. I argue that the introduction and application of the adjunct model enables curatorial practitioners to sensitively work around the dominant museum model, and circumvent the divide between institutional and independent curation. Introducing the adjunct model allows the curator to develop personalised narratives relating to the experience of fashion and clothing as an exhibited phenomenon in a variety of institutional and non-institutional sites. Hence this research project contributes to a developing field by proposing a valuable and nuanced model for fashion curation.

Relevance:

80.00%

Publisher:

Abstract:

The nonlinear stability analysis introduced by Chen and Haughton [1] is employed to study the full nonlinear stability of the non-homogeneous spherically symmetric deformation of an elastic thick-walled sphere. The shell is composed of an arbitrary homogeneous, incompressible elastic material. The stability criterion ultimately requires the solution of a third-order nonlinear ordinary differential equation. Numerical calculations performed for a wide variety of well-known incompressible materials are then compared with existing bifurcation results and are found to be identical. Further analysis and comparison between stability and bifurcation are conducted for the case of thin shells and we prove by direct calculation that the two criteria are identical for all modes and all materials.
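As a generic illustration of how such a criterion can be evaluated numerically (not the paper's specific stability equation), the sketch below reduces a third-order nonlinear ODE, here the Blasius-type equation f''' + f f''/2 = 0, to a first-order system and integrates it with SciPy.

```python
# Reduce a third-order nonlinear ODE to a first-order system and integrate it.
from scipy.integrate import solve_ivp

def rhs(t, y):
    f, fp, fpp = y                     # y = (f, f', f'')
    return [fp, fpp, -0.5 * f * fpp]   # f''' = -f*f''/2 (illustrative equation)

sol = solve_ivp(rhs, (0.0, 10.0), [0.0, 0.0, 0.332], rtol=1e-8, atol=1e-10)
print(sol.y[1, -1])   # f'(10); approaches 1 for the classical similarity solution
```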

Relevance:

80.00%

Publisher:

Abstract:

A theoretical analysis of the bistability associated with the excitation of surface magnetoplasma waves (SWs) propagating across an external magnetic field at the semiconductor-metal interface by the attenuated total reflection (ATR) method is presented. The Kretschmann-Raether configuration of the ATR method is considered, i.e. a plane electromagnetic wave is incident onto a metal surface through a coupling prism. The third-order nonlinearity of the semiconductor medium is treated in general form using the formalism of third-order nonlinear susceptibilities and perturbation theory. Examples of the nonlinear mechanisms that influence SW propagation are given. The analytical and numerical analyses show that bistable regimes of SW excitation can be realised. The SW amplitude values providing bistability in the structure are evaluated and found to be low enough to allow experimental observation.
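A generic, hedged illustration of how a third-order (Kerr-type) nonlinearity produces bistability: for a dispersive nonlinearity the internal surface-wave intensity X and the incident intensity Y can be related by a cubic such as Y = X(1 + (theta - X)^2), which becomes S-shaped (bistable) for sufficiently large detuning. The relation and parameter values below are illustrative assumptions, not the paper's derivation.

```python
# Locate the bistable input range of a generic cubic (Kerr-type) response curve.
import numpy as np

theta = 3.0                              # normalised detuning (assumed value)
X = np.linspace(0.0, 6.0, 601)           # internal surface-wave intensity (arb. units)
Y = X * (1.0 + (theta - X) ** 2)         # incident intensity required to sustain X

# Bistability shows up where dY/dX < 0 (the unstable middle branch of the S-curve).
dYdX = np.gradient(Y, X)
lo, hi = Y[dYdX < 0].min(), Y[dYdX < 0].max()
print(f"bistable input range: {lo:.2f} < Y < {hi:.2f}")
```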

Relevance:

40.00%

Publisher:

Abstract:

With the advent of Service Oriented Architecture, Web services have gained tremendous popularity. Due to the availability of a large number of Web services, finding an appropriate Web service according to the requirements of the user is a challenge. This creates the need to establish an effective and reliable process of Web service discovery. A considerable body of research has emerged to develop methods that improve the accuracy of Web service discovery to match the best service. The process of Web service discovery results in suggesting many individual services that partially fulfil the user's interest. Considering the semantic relationships of the words used to describe the services, as well as the input and output parameters, can lead to more accurate Web service discovery. Appropriate linking of individual matched services should fully satisfy the requirements the user is looking for. This research proposes to integrate a semantic model and a data mining technique to enhance the accuracy of Web service discovery. A novel three-phase Web service discovery methodology has been proposed. The first phase performs match-making to find semantically similar Web services for a user query. In order to perform semantic analysis on the content present in the Web service description language document, a support-based latent semantic kernel is constructed using an innovative concept of binning and merging on a large quantity of text documents covering diverse domains of knowledge. The use of a generic latent semantic kernel constructed with a large number of terms helps to find the hidden meaning of the query terms which otherwise could not be found. Sometimes a single Web service is unable to fully satisfy the requirement of the user. In such cases, a composition of multiple inter-related Web services is presented to the user. The task of checking the possibility of linking multiple Web services is done in the second phase. Once the feasibility of linking Web services is checked, the objective is to provide the user with the best composition of Web services. In the link analysis phase, the Web services are modelled as nodes of a graph and an all-pairs shortest-path algorithm is applied to find the optimum path with the minimum traversal cost. The third phase, system integration, integrates the results from the preceding two phases using an original fusion algorithm in the fusion engine. Finally, the recommendation engine, which is an integral part of the system integration phase, makes the final recommendations of individual and composite Web services to the user. In order to evaluate the performance of the proposed method, extensive experimentation has been performed. Results of the proposed support-based semantic kernel method of Web service discovery are compared with the results of a standard keyword-based information-retrieval method and a clustering-based machine-learning method of Web service discovery. The proposed method outperforms both the information-retrieval and the machine-learning based methods. Experimental results and statistical analysis also show that the best Web service compositions are obtained by considering 10 to 15 of the Web services found in phase-I for linking. Empirical results also ascertain that the fusion engine boosts the accuracy of Web service discovery by combining the inputs from both the semantic analysis (phase-I) and the link analysis (phase-II) in a systematic fashion.
Overall, the accuracy of Web service discovery with the proposed method shows a significant improvement over traditional discovery methods.
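The link-analysis phase described above can be pictured with a small, self-contained sketch: candidate services become graph nodes, feasible input/output links carry hypothetical costs, and an all-pairs shortest-path algorithm (Floyd-Warshall here) returns the cheapest composition between two services. The service names and costs are invented for illustration.

```python
# All-pairs shortest paths over a toy service-composition graph (Floyd-Warshall).
INF = float("inf")
services = ["GeocodeAddress", "FindHotels", "CheckAvailability", "BookRoom"]   # made-up names
n = len(services)

cost = [[0 if i == j else INF for j in range(n)] for i in range(n)]
edges = {(0, 1): 1.0, (1, 2): 2.0, (2, 3): 1.5, (1, 3): 4.0}   # assumed linking costs
for (i, j), c in edges.items():
    cost[i][j] = c

nxt = [[j if cost[i][j] < INF else None for j in range(n)] for i in range(n)]
for k in range(n):                 # Floyd-Warshall with path reconstruction
    for i in range(n):
        for j in range(n):
            if cost[i][k] + cost[k][j] < cost[i][j]:
                cost[i][j] = cost[i][k] + cost[k][j]
                nxt[i][j] = nxt[i][k]

def path(i, j):
    """Reconstruct the cheapest composition from service i to service j."""
    if nxt[i][j] is None:
        return None
    p = [i]
    while i != j:
        i = nxt[i][j]
        p.append(i)
    return [services[s] for s in p]

print(path(0, 3), "cost:", cost[0][3])   # cheapest composition from first to last service
```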

Relevance:

40.00%

Publisher:

Abstract:

This appendix describes the Order Fulfillment process followed by a fictitious company named Genko Oil. The process is freely inspired by the VICS (Voluntary Inter-industry Commerce Solutions) reference model and provides a demonstration of YAWL's capabilities in modelling complex control-flow, data and resourcing requirements.

Relevance:

40.00%

Publisher:

Abstract:

Biased estimation has the advantage of reducing the mean squared error (MSE) of an estimator. The question of interest is how biased estimation affects model selection. In this paper, we introduce biased estimation to a range of model selection criteria. Specifically, we analyze the performance of the minimum description length (MDL) criterion based on biased and unbiased estimation and compare it against modern model selection criteria such as Kay's conditional model order estimator (CME), the bootstrap, and the more recently proposed hook-and-loop resampling-based model selection. The advantages and limitations of the considered techniques are discussed. The results indicate that, in some cases, biased estimators can slightly improve the selection of the correct model. We also give an example for which the CME with an unbiased estimator fails but could regain its power when a biased estimator is used.
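A minimal sketch of the kind of comparison described, under simplifying assumptions: MDL-based order selection for a polynomial regression, computed once with the unbiased noise-variance estimate RSS/(n - k) and once with the biased maximum-likelihood estimate RSS/n (a simple stand-in for the paper's biased estimators). The data and true order are made up for illustration.

```python
# Compare MDL model-order selection under biased vs. unbiased noise-variance estimates.
import numpy as np

rng = np.random.default_rng(1)
n = 40
x = np.linspace(-1, 1, n)
y = 1.0 - 0.5 * x + 2.0 * x**2 + rng.normal(scale=0.4, size=n)   # true polynomial order = 2

def mdl(order, biased):
    X = np.vander(x, order + 1, increasing=True)      # design matrix with k = order+1 parameters
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    rss = np.sum((y - X @ beta) ** 2)
    k = order + 1
    sigma2 = rss / n if biased else rss / (n - k)     # biased (ML) vs. unbiased variance estimate
    return 0.5 * n * np.log(sigma2) + 0.5 * k * np.log(n)

orders = range(0, 8)
print("selected order (unbiased):", min(orders, key=lambda p: mdl(p, biased=False)))
print("selected order (biased):  ", min(orders, key=lambda p: mdl(p, biased=True)))
```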

Relevance:

40.00%

Publisher:

Abstract:

The significant challenge faced by government in demonstrating value for money in the delivery of major infrastructure revolves around estimating the costs and benefits of alternative modes of procurement. Faced with this challenge, one approach is to focus on a dominant performance outcome visible on the opening day of the asset as the means to select the procurement approach. In this case, value for money becomes a largely nominal concept, determined by the selected procurement mode delivering, or not delivering, the selected performance outcome, notwithstanding possible under-delivery on other desirable performance outcomes, as well as possibly incurring excessive transaction costs. This paper proposes a mind-set change in this particular practice, to an approach in which the analysis commences with the conditions pertaining to the project and proceeds to deploy transaction cost and production cost theory to indicate a procurement approach that can claim superior value for money relative to other competing procurement modes. This approach to delivering value for money in relative terms is developed in a first-order procurement decision-making model outlined in this paper. The model could be complementary to the Public Sector Comparator (PSC) in terms of cross-validation, and it more readily lends itself to public dissemination. As a possible alternative to the PSC, the model could save time and money by requiring project details to a lesser extent than the reference project, and may send a stronger signal to the market that encourages more innovation and competition.

Relevance:

40.00%

Publisher:

Abstract:

The traditional searching method for model-order selection in linear regression is a nested full-parameter-set searching procedure over the desired orders, which we call full-model order selection. On the other hand, a method for model selection searches for the best sub-model within each order. In this paper, we propose using the model-selection searching method for model-order selection, which we call partial-model order selection. We show by simulations that the proposed searching method gives better accuracies than the traditional one, especially for low signal-to-noise ratios, over a wide range of model-order selection criteria (both information-theoretic and bootstrap-based). Also, we show that for some models the performance of the bootstrap-based criterion improves significantly by using the proposed partial-model selection searching method.

Index Terms: model order estimation, model selection, information theoretic criteria, bootstrap

1. INTRODUCTION

Several model-order selection criteria can be applied to find the optimal order. Some of the more commonly used information-theoretic procedures include Akaike's information criterion (AIC) [1], corrected Akaike (AICc) [2], minimum description length (MDL) [3], normalized maximum likelihood (NML) [4], the Hannan-Quinn criterion (HQC) [5], conditional model-order estimation (CME) [6], and the efficient detection criterion (EDC) [7]. From a practical point of view, it is difficult to decide which model-order selection criterion to use. Many of them perform reasonably well when the signal-to-noise ratio (SNR) is high. The discrepancies in their performance, however, become more evident when the SNR is low. In those situations, the performance of a given technique is determined not only by the model structure (say, a polynomial trend versus a Fourier series) but, more importantly, by the relative values of the parameters within the model. This makes the comparison between model-order selection algorithms difficult, as within the same model with a given order one can find an example for which one of the methods performs favourably or fails [6, 8]. Our aim is to improve the performance of the model-order selection criteria in cases where the SNR is low by considering a model-selection searching procedure that takes into account not only the full-model order search but also a partial-model order search within the given model order. Understandably, the improvement in the performance of the model-order estimation comes at the expense of additional computational complexity.
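The difference between the two searches can be made concrete with a small example. In the sketch below (data, regressors and the BIC-style criterion are assumptions, not the paper's setup), the full-model search scores only the nested models {x1}, {x1, x2}, ..., whereas the partial-model search scores the best subset of each size before choosing the order.

```python
# Contrast nested full-model order selection with best-subset (partial-model) selection.
import itertools
import numpy as np

rng = np.random.default_rng(2)
n, p = 60, 6
X = rng.normal(size=(n, p))
y = 2.0 * X[:, 0] - 1.5 * X[:, 3] + rng.normal(scale=1.0, size=n)   # only x1 and x4 matter

def bic(cols):
    """BIC-style score for the linear model using the given regressor columns."""
    A = X[:, list(cols)]
    beta, *_ = np.linalg.lstsq(A, y, rcond=None)
    rss = np.sum((y - A @ beta) ** 2)
    return n * np.log(rss / n) + len(cols) * np.log(n)

# Full-model order selection: only the nested prefixes (x1), (x1, x2), ...
full = min((tuple(range(k)) for k in range(1, p + 1)), key=bic)

# Partial-model order selection: best subset within every order, then best overall.
partial = min((min(itertools.combinations(range(p), k), key=bic)
               for k in range(1, p + 1)), key=bic)

print("full-model search picks columns:   ", full)
print("partial-model search picks columns:", partial)
```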

Relevance:

40.00%

Publisher:

Abstract:

Corneal-height data are typically measured with videokeratoscopes and modeled using a set of orthogonal Zernike polynomials. We address the estimation of the number of Zernike polynomials, which is formalized as a model-order selection problem in linear regression. Classical information-theoretic criteria tend to overestimate the corneal surface due to the weakness of their penalty functions, while bootstrap-based techniques tend to underestimate the surface or require extensive processing. In this paper, we propose to use the efficient detection criterion (EDC), which has the same general form of information-theoretic-based criteria, as an alternative to estimating the optimal number of Zernike polynomials. We first show, via simulations, that the EDC outperforms a large number of information-theoretic criteria and resampling-based techniques. We then illustrate that using the EDC for real corneas results in models that are in closer agreement with clinical expectations and provides means for distinguishing normal corneal surfaces from astigmatic and keratoconic surfaces.
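An EDC-style rule shares the form of the information-theoretic criteria but uses a different penalty sequence C_n. The sketch below applies such a rule to a generic radial-basis surface fit standing in for the Zernike expansion; the simulated data, the basis, and the choice C_n = sqrt(n ln n) are assumptions for illustration, not the paper's corneal data or exact criterion.

```python
# EDC-style selection of the number of basis terms for a simulated radial surface.
import numpy as np

rng = np.random.default_rng(3)
n = 200
r = rng.uniform(0, 1, n)                                     # normalised radial coordinate
height = 0.8 * (2 * r**2 - 1) + 0.1 * (6 * r**4 - 6 * r**2 + 1) \
         + rng.normal(scale=0.05, size=n)                    # two low-order modes plus noise

def edc(k):
    # Fit the first k radial basis functions r^0, r^2, ..., r^(2(k-1)).
    X = np.column_stack([r ** (2 * j) for j in range(k)])
    beta, *_ = np.linalg.lstsq(X, height, rcond=None)
    rss = np.sum((height - X @ beta) ** 2)
    return n * np.log(rss / n) + k * np.sqrt(n * np.log(n))   # EDC-style penalty C_n = sqrt(n ln n)

best_k = min(range(1, 9), key=edc)
print("selected number of basis terms:", best_k)
```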

Relevance:

40.00%

Publisher:

Abstract:

Given global demand for new infrastructure, governments face substantial challenges in funding new infrastructure and simultaneously delivering Value for Money (VfM). The paper begins with an update on a key development in a new early/first-order procurement decision making model that deploys production cost/benefit theory and theories concerning transaction costs from the New Institutional Economics, in order to identify a procurement mode that is likely to deliver the best ratio of production costs and transaction costs to production benefits, and therefore deliver superior VfM relative to alternative procurement modes. In doing so, the new procurement model is also able to address the uncertainty concerning the relative merits of Public-Private Partnerships (PPP) and non-PPP procurement approaches. The main aim of the paper is to develop competition as a dependent variable/proxy for VfM and a hypothesis (overarching proposition), as well as developing a research method to test the new procurement model. Competition reflects both production costs and benefits (absolute level of competition) and transaction costs (level of realised competition) and is a key proxy for VfM. Using competition as a proxy for VfM, the overarching proposition is given as: When the actual procurement mode matches the predicted (theoretical) procurement mode (informed by the new procurement model), then actual competition is expected to match potential competition (based on actual capacity). To collect data to test this proposition, the research method that is developed in this paper combines a survey and case study approach. More specifically, data collection instruments for the surveys to collect data on actual procurement, actual competition and potential competition are outlined. Finally, plans for analysing this survey data are briefly mentioned, along with noting the planned use of analytical pattern matching in deploying the new procurement model and in order to develop the predicted (theoretical) procurement mode.

Relevance:

40.00%

Publisher:

Abstract:

Given global demand for new infrastructure, governments face substantial challenges in funding new infrastructure and simultaneously delivering Value for Money (VfM). As background to this challenge, a brief review is given of current practice in the selection of major public sector infrastructure in Australia, along with a review of the related literature concerning the Multi-Attribute Utility Approach (MAUA) and the effect of MAUA on the role of risk management in procurement selection. To contribute towards addressing the key weaknesses of MAUA, a new first-order procurement decision-making model is mentioned. A brief summary is also given of the research method and hypothesis used to test and develop the new procurement model, which uses competition as the dependent variable and as a proxy for VfM. The hypothesis is given as follows: When the actual procurement mode matches the theoretical/predicted procurement mode (informed by the new procurement model), then actual competition is expected to match optimum competition (based on actual prevailing capacity vis-à-vis the theoretical/predicted procurement mode), subject to efficient tendering. The aim of this paper is to report on progress towards testing this hypothesis in terms of an analysis of two of the four data components in the hypothesis, that is, actual procurement and actual competition across 87 major public sector road and health projects in Australia. In conclusion, it is noted that the Global Financial Crisis (GFC) has seen a significant increase in competition in major public sector road and health infrastructure, and that, if any imperfections in procurement and/or tendering are discernible, this would create the opportunity, through the deployment of economic principles embedded in the new procurement model and/or adjustments in tendering, to maintain some of this higher post-GFC level of competition throughout the next business cycle/upturn in demand, including private sector demand. Finally, the paper previews the next steps in the research with regard to the collection and analysis of data concerning theoretical/predicted procurement and optimum competition.

Relevance:

40.00%

Publisher:

Abstract:

In information retrieval (IR) research, more and more focus has been placed on optimizing a query language model by detecting and estimating the dependencies between the query and the observed terms occurring in the selected relevance feedback documents. In this paper, we propose a novel Aspect Language Modeling framework featuring term association acquisition, document segmentation, query decomposition, and an Aspect Model (AM) for parameter optimization. Through the proposed framework, we advance the theory and practice of applying high-order and context-sensitive term relationships to IR. We first decompose a query into subsets of query terms. Then we segment the relevance feedback documents into chunks using multiple sliding windows. Finally, we discover the higher-order term associations, that is, the terms in these chunks with a high degree of association with the subsets of the query. In this process, we adopt an approach that combines the AM with Association Rule (AR) mining. In our approach, the AM not only considers the subsets of a query as "hidden" states and estimates their prior distributions, but also evaluates the dependencies between the subsets of a query and the observed terms extracted from the chunks of feedback documents. The AR mining provides a reasonable initial estimation of the high-order term associations by discovering association rules from the document chunks. Experimental results on various TREC collections verify the effectiveness of our approach, which significantly outperforms a baseline language model and two state-of-the-art query language models, namely the Relevance Model and the Information Flow model.
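Two steps of the framework, sliding-window segmentation and association-rule-style scoring of chunk terms against query subsets, can be sketched in a few lines. The documents, query, window size and the support/confidence measure below are toy assumptions, not the paper's TREC setup or its Aspect Model estimation.

```python
# Sliding-window chunking plus a simple confidence score between query subsets and chunk terms.
from itertools import combinations
from collections import Counter

docs = [
    "solar power panels convert sunlight into electricity for the grid",
    "wind and solar energy reduce fossil fuel electricity generation",
]
query = ["solar", "electricity"]
window, step = 7, 2

# 1) Sliding-window segmentation of each feedback document into overlapping chunks.
chunks = []
for d in docs:
    words = d.split()
    chunks += [set(words[i:i + window])
               for i in range(0, max(1, len(words) - window + 1), step)]

# 2) For every non-empty subset of the query, rank chunk terms by
#    conf(subset -> term) = support(subset + term) / support(subset).
for k in range(1, len(query) + 1):
    for subset in combinations(query, k):
        support = [c for c in chunks if set(subset) <= c]
        if not support:
            continue
        counts = Counter(t for c in support for t in c if t not in subset)
        top = counts.most_common(3)
        print(subset, "->", [(t, round(cnt / len(support), 2)) for t, cnt in top])
```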