875 results for Theory and Algorithms
Abstract:
Cognitive complexity and control theory and relational complexity theory attribute developmental changes in theory of mind (TOM) to complexity. In 3 studies, 3-, 4-, and 5-year-olds performed TOM tasks (false belief, appearance-reality), less complex connections (Level 1 perspective-taking) tasks, and transformations tasks (understanding the effects of location changes and colored filters) with content similar to TOM. There were also predictor tasks at binary-relational and ternary-relational complexity levels, with different content. Consistent with complexity theories: (a) connections and transformations were easier and mastered earlier than TOM; (b) predictor tasks accounted for more than 80% of age-related variance in TOM; and (c) ternary-relational items accounted for TOM variance, before and after controlling for age and binary-relational items. Prediction did not require hierarchically structured predictor tasks.
Abstract:
Thesis (Ph.D.)--University of Washington, 2016-06
Abstract:
We present a new approach that accounts for the non-additivity of the attractive parts of the solid-fluid and fluid-fluid potentials to improve the description of nitrogen and argon adsorption isotherms on graphitized carbon black in the framework of non-local density functional theory (NLDFT). We show that the strong solid-fluid interaction in the first monolayer decreases the fluid-fluid interaction, which prevents the two-dimensional phase transition from occurring. This results in a smoother isotherm, which agrees much better with experimental data. In the region of multi-layer coverage, conventional NLDFT and grand canonical Monte Carlo simulations are known to over-predict the amount adsorbed relative to experimental isotherms. Accounting for the non-additivity factor decreases the solid-fluid interaction as intermolecular interactions in the dense adsorbed fluid increase, preventing the over-prediction of loading in the multi-layer region. This improvement of NLDFT allows us to describe experimental nitrogen and argon isotherms on carbon black quite accurately, with a mean error of 2.5 to 5.8% instead of 17 to 26% for the conventional technique. With this approach, the local isotherms of model pores can be derived, and consequently a more reliable pore size distribution (PSD) can be obtained. We illustrate this by applying our theory to nitrogen and argon isotherms on a number of activated carbons. The fit between our model and the data is much better than with conventional NLDFT, suggesting that the PSD obtained with our approach is more reliable.
Abstract:
A new approach is developed to analyze the thermodynamic properties of a sub-critical fluid adsorbed in a slit pore of activated carbon. The approach is based on the representation that the adsorbed fluid forms an ordered structure close to a smoothed solid surface. This ordered structure is modelled as a collection of parallel molecular layers. Such a structure allows us to express the Helmholtz free energy of a molecular layer as the sum of the intrinsic Helmholtz free energy specific to that layer and the potential energy of interaction of that layer with all other layers and with the solid surface. The intrinsic Helmholtz free energy of a molecular layer is a function (at a given temperature) of its two-dimensional density, and it can be readily obtained from bulk-phase properties, while the interlayer potential energy is determined using the 10-4 Lennard-Jones potential. The positions of all layers close to the graphite surface or in a slit pore are taken to correspond to the minimum of the potential energy of the system. This model has led to accurate predictions of nitrogen and argon adsorption on carbon black at their normal boiling points. In the case of adsorption in slit pores, local isotherms are determined from the minimization of the grand potential. The model provides a reasonable description of the 0-1 monolayer transition, phase transitions and packing effects. The adsorption of nitrogen at 77.35 K and argon at 87.29 K on activated carbons is analyzed to illustrate the potential of this theory, and the derived pore-size distribution compares favourably with that obtained by density functional theory (DFT). The model is less time-consuming than methods such as DFT and Monte Carlo simulation, and, most importantly, it can be readily extended to the adsorption of mixtures and to capillary condensation phenomena.
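The 10-4 Lennard-Jones potential referred to above has a standard form (Steele's 10-4 potential for a structureless graphitic plane); as a sketch of how the layer-wall term would enter, with rho_s the surface density of carbon atoms and epsilon_sf, sigma_sf the solid-fluid Lennard-Jones parameters (standard notation, not taken from the paper):

\[ \varphi_{sf}(z) = 2\pi \rho_s \varepsilon_{sf} \sigma_{sf}^2 \left[ \frac{2}{5} \left( \frac{\sigma_{sf}}{z} \right)^{10} - \left( \frac{\sigma_{sf}}{z} \right)^{4} \right] \]

where z is the distance of a molecular layer from the pore wall. In a slit pore of width H each layer interacts with both walls, \( \varphi_{sf}(z) + \varphi_{sf}(H - z) \), and minimizing the total potential energy over the layer positions yields the equilibrium layer structure described in the abstract.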
Abstract:
This paper investigates the performance of the EASI algorithm and the proposed EKENS algorithm for linear and nonlinear mixtures. The proposed EKENS algorithm is based on a modified equivariant algorithm and kernel density estimation. The theory and characteristics of both algorithms are discussed for the blind source separation model. The separation structure for nonlinear mixtures is based on a nonlinear stage followed by a linear stage. Simulations with artificial and natural data demonstrate the feasibility and good performance of the proposed EKENS algorithm.
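The abstract gives no update equations, but the standard EASI rule (Cardoso and Laheld's equivariant adaptive separation via independence) for the linear stage is well known; a minimal sketch follows. The nonlinearity g, the step size mu, and the toy signals are illustrative choices; the kernel-density EKENS stage is not specified in the abstract and is not reproduced here.

```python
import numpy as np

def easi_separate(X, mu=0.002, g=np.tanh, n_epochs=10):
    """Adaptive blind source separation with the EASI update.

    X : (n_sensors, n_samples) array of linearly mixed signals.
    Returns the separating matrix B, so that Y = B @ X estimates the sources.
    """
    n = X.shape[0]
    B = np.eye(n)
    I = np.eye(n)
    for _ in range(n_epochs):
        for x in X.T:
            y = B @ x
            gy = g(y)
            # Equivariant serial update (relative gradient of the contrast):
            # B <- B - mu * (y y^T - I + g(y) y^T - y g(y)^T) B
            H = np.outer(y, y) - I + np.outer(gy, y) - np.outer(y, gy)
            B = B - mu * H @ B
    return B

# Toy usage (illustrative only): two sources, random mixing matrix.
rng = np.random.default_rng(0)
t = np.linspace(0, 10, 5000)
S = np.vstack([np.sign(np.sin(7 * t)), rng.laplace(size=t.size)])
A = rng.normal(size=(2, 2))
B = easi_separate(A @ S)
Y = B @ (A @ S)  # recovers S up to permutation and scale
```

The update is equivariant because it depends on the data only through the output y = Bx, so separation performance is independent of the conditioning of the mixing matrix.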
Abstract:
Finding single-pair shortest paths on a surface is a fundamental problem in various domains, such as 3D applications of Geographic Information Systems (GIS), robotic path planning systems, and surface nearest neighbor queries in spatial databases. Currently, to solve the problem, existing algorithms must traverse the entire polyhedral surface. With the rapid advances in areas like the Global Positioning System (GPS), Computer Aided Design (CAD) systems and laser range scanners, surface models are becoming more and more complex; it is not uncommon for a surface model to contain millions of polygons, and the single-pair shortest path problem is getting harder and harder to solve. Based on the observation that the single-pair shortest path is local, we propose in this paper efficient methods that exclude part of the surface model from the search process. Three novel expansion-based algorithms are proposed, namely the Naive algorithm, the Rectangle-based algorithm and the Ellipse-based algorithm. Each algorithm uses a two-step approach to find the shortest path: (1) compute an initial local path; (2) use the length of this initial path to select a search region in which the global shortest path must lie. The search process terminates once the global optimality criteria are satisfied. By reducing the search region, performance is improved dramatically in most cases.
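As a rough illustration of the ellipse-based idea (function and variable names here are hypothetical, not the paper's): straight-line distance lower-bounds geodesic distance, so once an initial path of length L from s to t is known, any point p on the true shortest path must satisfy |sp| + |pt| <= L, i.e. it lies inside an ellipsoid with foci s and t. Faces outside that region can be dropped before the surface search runs.

```python
import numpy as np

def prune_faces(vertices, faces, s, t, L):
    """Keep only faces that may contain a point of the shortest path.

    vertices : (n, 3) array of vertex coordinates
    faces    : (m, 3) array of vertex indices per triangle
    s, t     : query endpoints, shape (3,)
    L        : length of an initial (local) path from s to t

    A vertex v with |sv| + |vt| > L cannot lie on a path shorter
    than L, since Euclidean distance lower-bounds geodesic distance.
    A face is kept if any of its vertices passes the test; in practice
    L would be inflated slightly, since a triangle can clip the
    ellipsoid even when all three of its vertices fall outside it.
    """
    d = (np.linalg.norm(vertices - s, axis=1)
         + np.linalg.norm(vertices - t, axis=1))
    inside = d <= L
    return faces[inside[faces].any(axis=1)]
```

The tighter the initial path (step 1), the smaller the ellipsoid and the larger the fraction of the model excluded from step 2.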
Abstract:
In this thesis we develop a new generative model of social networks belonging to the family of time-varying networks. Correctly modelling the mechanisms that shape the growth of a network and the dynamics of edge activation and inactivation is of central importance in network science. Indeed, by means of generative models that mimic the real-world dynamics of contacts in social networks, it is possible to forecast the outcome of an epidemic process, optimize an immunization campaign, or optimally spread information among individuals. This task can now be tackled thanks to the recent availability of large-scale, high-quality and time-resolved datasets. This wealth of digital data has allowed us to deepen our understanding of the structure and properties of many real-world networks. Moreover, the empirical evidence of a temporal dimension in networks prompted a paradigm shift from a static representation of graphs to a time-varying one. In this work we exploit the activity-driven paradigm (a modelling tool belonging to the family of time-varying networks) to develop a general dynamical model that encodes two fundamental mechanisms shaping the topology and temporal structure of social networks: social capital allocation and burstiness. The former accounts for the fact that individuals do not invest their time and social interactions at random, but rather allocate them toward already-known nodes of the network. The latter accounts for the heavy-tailed distributions of inter-event times in social networks. We empirically measure the properties of these two mechanisms from seven real-world datasets, develop a data-driven model, and solve it analytically. We then check the results against numerical simulations and test our predictions on real-world datasets, finding good agreement between the two. Moreover, we find and characterize a non-trivial interplay between burstiness and social capital allocation in the parameter phase space. Finally, we present a novel approach to the development of a complete generative model of time-varying networks. This model is inspired by Kauffman's adjacent-possible theory and is based on a generalized version of Pólya's urn. Remarkably, most of the complex and heterogeneous features of real-world social networks are naturally reproduced by this dynamical model, together with many higher-order topological properties (clustering coefficient, community structure, etc.).
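A minimal sketch of how the social-capital mechanism is usually encoded in activity-driven models with memory: an active node contacts a new node with a probability that decays with the number n of ties it already has, commonly p(n) = (1 + n/c)^(-beta) (this functional form is the one used in the memory-driven activity literature and is an assumption here, as are all parameter names; burstiness, i.e. heavy-tailed inter-event times, would replace the per-step Bernoulli activation and is omitted for brevity).

```python
import random
from collections import defaultdict

def step(activities, contacts, beta=0.5, c=1.0):
    """One time step of an activity-driven network with tie memory.

    activities : dict node -> activation probability per time step
    contacts   : dict node -> set of previously contacted nodes
    Returns the list of edges active at this time step.
    """
    nodes = list(activities)
    edges = []
    for u in nodes:
        if random.random() >= activities[u]:
            continue  # node u is not active at this step
        fresh = [w for w in nodes if w != u and w not in contacts[u]]
        n = len(contacts[u])
        p_new = (1.0 + n / c) ** (-beta)  # assumed reinforcement form
        if fresh and (n == 0 or random.random() < p_new):
            v = random.choice(fresh)              # explore: invest in a new tie
        elif contacts[u]:
            v = random.choice(list(contacts[u]))  # exploit: reinforce an old tie
        else:
            continue  # degenerate single-node case
        contacts[u].add(v)
        contacts[v].add(u)
        edges.append((u, v))
    return edges

# Toy usage (illustrative): 100 nodes with heterogeneous activities.
acts = {i: random.uniform(0.01, 0.2) for i in range(100)}
mem = defaultdict(set)
timeline = [step(acts, mem) for _ in range(500)]
```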
Abstract:
In “The English Patient: English Grammar and teaching in the Twentieth Century”, Hudson and Walmsley (2005) contend that the decline of grammar in schools was linked to a similar decline in English universities, where no serious research or teaching on English grammar took place. This article argues that such a decline was due not only to a lack of research, but also to the fact that it suited the educational policies of the time. It applies Bernstein’s theory of pedagogic discourse (1990, 1996) to a case study of the debate surrounding the introduction of a national curriculum in English in England in the late 1980s and the National Literacy Strategy in the 1990s, to demonstrate the links between academic theory and educational policy.
Abstract:
We derive a mean field algorithm for binary classification with Gaussian processes, based on the TAP approach originally proposed in the statistical physics of disordered systems. The theory also yields an approximate leave-one-out estimator for the generalization error, which is computed at no extra computational cost. We show that from the TAP approach it is possible to derive both a simpler 'naive' mean field theory and support vector machines (SVMs) as limiting cases. For both the mean field algorithms and support vector machines, simulation results for three small benchmark data sets are presented. They show (1) that one may get state-of-the-art performance by using the leave-one-out estimator for model selection, and (2) that the built-in leave-one-out estimators are extremely precise when compared to the exact leave-one-out estimate. The latter result is taken as strong support for the internal consistency of the mean field approach.
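The abstract states the key practical point without formulas; in cavity/TAP-style mean field theories the leave-one-out estimate comes for free because the "cavity" prediction for example i already excludes that example's own contribution. Schematically (notation assumed, not taken from the paper):

\[ \hat{\epsilon}_{\mathrm{loo}} = \frac{1}{n} \sum_{i=1}^{n} \Theta\!\left( -y_i \, f_{\setminus i} \right), \]

where \( f_{\setminus i} \) is the cavity mean of the latent Gaussian-process output at input \( x_i \), obtained from the converged mean field solution with example i's term removed, and \( \Theta \) is the step function counting misclassifications. Since the cavity means are by-products of solving the TAP equations, no retraining over n leave-one-out splits is needed.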
Abstract:
This paper considers the role of opportunism in three contractual theories of the firm: rent-seeking theory, property rights theory, and agency theory. In each case I examine whether it is possible to have a functioning contractual theory of the firm without recourse to opportunism. Without opportunism, firms may still exist as a result of issues arising from (incomplete) contracting. Far from posing a problem for the theory of the firm, questioning the role of opportunism and the ubiquity of the hold-up problem helps us understand more about the purposes and functions of contracts, which go beyond mere incentive alignment.
Abstract:
The question of what to provide employees so that they reciprocate with desirable behaviors in the workplace has generated a great amount of work in the area of social exchange. Although offering fair compensation, including salary or wages and employee benefits, has been extensively studied, the effects of offering specific types of benefits, such as work-life balance benefits, and of the intangible rewards that such an offering conveys, have only been minimally explored. Building on past literature, this research examined the offering of work-life balance benefits, the value employees place on those benefits, the communication of the benefits by the organization to employees, and their effect on employee attitudes and behaviors. The goal was to identify the effect on desirable outcomes when work-life balance benefits are offered, in order to determine the usefulness to the organization of offering such benefits. To test these effects, a study of an organization known to offer a strong work-life balance benefits package was undertaken. This was accomplished through the distribution of questionnaires to 408 employee respondents and their 79 supervisors to identify possible relationships, followed by interviews of 12 individuals to ascertain the reasons for the links observed in the analysis. The data were analyzed through correlation analysis, multilevel analysis and regression analysis in SPSS. The results of the quantitative analysis supported a relationship between the offering of work-life balance benefits and perceived organizational support, perceived distributive justice, job satisfaction and OCBO. The analysis showed a lack of support for a relationship between the offering of work-life balance benefits and organizational commitment, OCBI and IRB. The interviews offered possible reasons for the lack of support regarding the relationships between the offering of work-life balance benefits and organizational commitment as well as organizational citizenship behaviors (OCBI and IRB). The implications of these findings for future research, theory and practice in the offering of work-life balance benefits are discussed.
Abstract:
This article deals with a number of supply chain management (SCM) issues: SCM’s “Big Idea” – integration; the divergence of theory and practice – the limitations of “hard-wiring”; the “human chain”; and the way forward – asking the right question.
Abstract:
Mathematics Subject Classification: 26A33, 93C83, 93C85, 68T40
Abstract:
We present a review of the latest developments in one-dimensional (1D) optical wave turbulence (OWT). Based on an original experimental setup that allows for the implementation of 1D OWT, we are able to show that an inverse cascade occurs through the spontaneous evolution of the nonlinear field up to the point when modulational instability leads to soliton formation. After solitons are formed, further interaction of the solitons among themselves and with incoherent waves leads to a final condensate state dominated by a single strong soliton. Motivated by the observations, we develop a theoretical description, showing that the inverse cascade develops through six-wave interaction, and that this is the basic mechanism of nonlinear wave coupling for 1D OWT. We describe theory, numerics and experimental observations while trying to incorporate all the different aspects into a consistent context. The experimental system is described by two coupled nonlinear equations, which we explore within two wave limits that allow the evolution of the complex amplitude to be expressed in a single dynamical equation. The long-wave limit corresponds to waves with wavelengths larger than the electrical coherence length of the liquid crystal, and the short-wave limit to the opposite case. We show that both of these systems are of a dual cascade type, analogous to two-dimensional (2D) turbulence, which can be described by wave turbulence (WT) theory, and conclude that the cascades are induced by a six-wave resonant interaction process. WT theory predicts several stationary solutions (non-equilibrium and thermodynamic) to both the long- and short-wave systems, and we investigate the conditions required for their realization. Interestingly, the long-wave system is close to the integrable 1D nonlinear Schrödinger equation (NLSE) (which contains exact nonlinear soliton solutions), and as a result, during the inverse cascade, the nonlinearity of the system at low wave numbers becomes strong. Subsequently, due to the focusing nature of the nonlinearity, this leads to modulational instability (MI) of the condensate and the formation of solitons. Finally, with the aid of the probability density function (PDF) description of WT theory, we explain the coexistence and mutual interactions between solitons and the weakly nonlinear random wave background in the form of a wave turbulence life cycle (WTLC).
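For concreteness, the integrable focusing 1D NLSE mentioned above can be written (in one common normalization, assumed here) as

\[ i\,\psi_t + \tfrac{1}{2}\,\psi_{xx} + |\psi|^2 \psi = 0, \]

whose exact stationary bright-soliton solution \( \psi(x,t) = a\,\mathrm{sech}(a x)\, e^{i a^2 t/2} \) (amplitude a) is the kind of coherent structure that emerges from the modulational instability of the condensate during the inverse cascade.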
Abstract:
The author briefly sums up the main ideas and problems connected with the pricing of derivative products. The theory of derivative pricing uses the redundancy among the products on a market to determine the relative prices of those products. This, however, can be done only in a complete market, and thus it is only in a complete market that the concept of utility functions can be omitted from the theory and from the practice built upon it; for that reason the principle of risk-neutral pricing is misleading. To put it another way, the theory of derivative pricing can free itself from the concept of utility functions only at the price of imposing restrictions on the market structure that do not hold in reality. It is essential to emphasize this both in market practice and in teaching.
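A one-period binomial market is the textbook illustration of why completeness is doing the work here (an illustrative example, not taken from the article): with one risky asset moving from S to uS or dS and a riskless gross return R with d < R < u, every payoff X is replicable, and no-arbitrage forces the unique risk-neutral weight

\[ q = \frac{R - d}{u - d}, \qquad V_0 = \frac{1}{R}\,\mathbb{E}^{\mathbb{Q}}[X] = \frac{q\,X_u + (1-q)\,X_d}{R}. \]

Add a third state of the world (so the market becomes incomplete) and q is no longer unique; pricing then depends on preferences again, which is exactly the article's point that risk-neutral pricing dispenses with utility functions only under market-structure restrictions that do not hold in reality.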