145 results for Contractive constraint
Abstract:
Learning to talk about motion in a second language is very difficult because it involves restructuring deeply entrenched patterns from the first language (Slobin 1996). In this paper we argue that statistical learning (Saffran et al. 1997) can explain why L2 learners are only partially successful in restructuring their second language grammars. We explore to what extent L2 learners make use of two mechanisms of statistical learning, entrenchment and pre-emption (Boyd and Goldberg 2011), to acquire target-like expressions of motion and to retreat from overgeneralisation in this domain. Paying attention to the frequency of existing patterns in the input can help learners to adjust the frequency with which they use path and manner verbs in French, but it is insufficient for acquiring the boundary-crossing constraint (Slobin and Hoiting 1994) and learning what not to say. We also consider the role of language proficiency and exposure to French in explaining the findings.
Abstract:
A new sparse kernel density estimator is introduced based on the minimum integrated square error criterion for the finite mixture model. Since the constraint on the mixing coefficients of the finite mixture model is on the multinomial manifold, we use the well-known Riemannian trust-region (RTR) algorithm for solving this problem. The first- and second-order Riemannian geometry of the multinomial manifold are derived and utilized in the RTR algorithm. Numerical examples are employed to demonstrate that the proposed approach is effective in constructing sparse kernel density estimators with an accuracy competitive with those of existing kernel density estimators.
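As a rough illustration of the simplex constraint on the mixing coefficients, the sketch below fits mixture weights by projected gradient descent on a plug-in integrated-squared-error criterion. Projected gradient stands in for the Riemannian trust-region algorithm of the abstract, and the bandwidth `h` and the exact criterion are illustrative assumptions, not the authors' formulation:

```python
import numpy as np

def gauss(d2, s2):
    """Gaussian kernel evaluated on squared distances d2 with variance s2."""
    return np.exp(-d2 / (2 * s2)) / np.sqrt(2 * np.pi * s2)

def project_to_simplex(v):
    """Euclidean projection of v onto the probability simplex
    {w : w_i >= 0, sum_i w_i = 1} (the multinomial manifold's closure)."""
    u = np.sort(v)[::-1]
    css = np.cumsum(u)
    idx = np.arange(1, len(v) + 1)
    rho = np.nonzero(u + (1.0 - css) / idx > 0)[0][-1]
    theta = (1.0 - css[rho]) / (rho + 1.0)
    return np.maximum(v + theta, 0.0)

def sparse_kde_weights(x, h, steps=500, lr=0.01):
    """Fit kernel-mixture weights by projected gradient descent on a
    plug-in integrated-squared-error surrogate w'Qw - 2 b'w."""
    n = len(x)
    d2 = (x[:, None] - x[None, :]) ** 2
    Q = gauss(d2, 2 * h * h)           # overlap integrals of Gaussian kernels
    b = gauss(d2, h * h).mean(axis=1)  # plug-in estimate of the cross term
    w = np.full(n, 1.0 / n)
    for _ in range(steps):
        w = project_to_simplex(w - lr * (2 * Q @ w - 2 * b))
    return w
```

Because the projection can set coefficients exactly to zero, the fitted mixture is typically supported on a subset of the kernels, which is the source of sparsity in estimators of this kind.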
Abstract:
Learning a low-dimensional manifold from highly nonlinear, high-dimensional data has become increasingly important for discovering intrinsic representations that can be used for data visualization and preprocessing. The autoencoder is a powerful dimensionality reduction technique based on minimizing reconstruction error, and it has regained popularity because it is efficient for greedy pretraining of deep neural networks. Compared with Neural Networks (NNs), Gaussian Processes (GPs) have shown advantages in model inference, optimization and performance, and GPs have been successfully applied in nonlinear Dimensionality Reduction (DR) algorithms such as the Gaussian Process Latent Variable Model (GPLVM). In this paper we propose the Gaussian Process Autoencoder Model (GPAM) for dimensionality reduction, extending the classic NN-based autoencoder to a GP-based autoencoder. More interestingly, the novel model can also be viewed as a back-constrained GPLVM (BC-GPLVM) in which the smooth back-constraint function is represented by a GP. Experiments verify the performance of the newly proposed model.
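The GP building block can be illustrated with a minimal GP regression posterior mean, i.e. the kind of smooth mapping a GP back constraint provides. The RBF kernel, its hyperparameters, and the noise level below are illustrative assumptions, not the paper's settings:

```python
import numpy as np

def rbf(a, b, ell=0.2, sf=1.0):
    """RBF (squared-exponential) kernel between 1D input arrays a and b."""
    d2 = (a[:, None] - b[None, :]) ** 2
    return sf * np.exp(-0.5 * d2 / ell**2)

def gp_posterior_mean(x_train, y_train, x_test, noise=1e-4):
    """GP regression posterior mean: k(x*, X) (K + sigma^2 I)^{-1} y."""
    K = rbf(x_train, x_train) + noise * np.eye(len(x_train))
    return rbf(x_test, x_train) @ np.linalg.solve(K, y_train)
```

With a small noise term the posterior mean nearly interpolates the training data while remaining a smooth function of the input, which is exactly the property a back constraint exploits.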
Abstract:
Palaeoclimates across Europe for 6000 y BP were estimated from pollen data using the modern pollen analogue technique constrained with lake-level data. The constraint consists of restricting the set of modern pollen samples considered as analogues of the fossil samples to those locations where the implied change in annual precipitation minus evapotranspiration (P–E) is consistent with the regional change in moisture balance as indicated by lakes. An artificial neural network was used for the spatial interpolation of lake-level changes to the pollen sites, and for mapping palaeoclimate anomalies. The climate variables reconstructed were mean temperature of the coldest month (Tc), growing degree days above 5 °C (GDD), moisture availability expressed as the ratio of actual to equilibrium evapotranspiration (α), and P–E. The constraint improved the spatial coherency of the reconstructed palaeoclimate anomalies, especially for P–E. The reconstructions indicate clear spatial and seasonal patterns of Holocene climate change, which can provide a quantitative benchmark for the evaluation of palaeoclimate model simulations. Winter temperatures (Tc) were 1–3 K greater than present in the far N and NE of Europe, but 2–4 K less than present in the Mediterranean region. Summer warmth (GDD) was greater than present in NW Europe (by 400–800 K day at the highest elevations) and in the Alps, but >400 K day less than present at lower elevations in S Europe. P–E was 50–250 mm less than present in NW Europe and the Alps, but α was 10–15% greater than present in S Europe and P–E was 50–200 mm greater than present in S and E Europe.
Abstract:
Configurations of supercooled liquids residing in their local potential minimum (i.e. in their inherent structure, IS) were found to support a non-zero shear stress. This IS stress was attributed to the constraint imposed on the energy minimization by the boundary conditions, which keep the size and shape of the simulation cell fixed. In this paper we further investigate the influence of these boundary conditions on the IS stress. We investigate its importance for the computation of the low-frequency shear modulus of a glass, obtaining a consistent picture for the low- and high-frequency shear moduli over the full temperature range. We find that the IS stress corresponds to a non-thermal contribution to the fluctuation term in the Born-Green expression; this leads to an unphysical divergence of the moduli in the low-temperature limit if no proper correction for this term is applied. Furthermore, we clarify the dependence of the IS stress on the system size and put its origin on a more formal basis.
Abstract:
Flow in geophysical fluids is commonly summarized by coherent streams, for example conveyor belt flows in extratropical cyclones or jet streaks in the upper troposphere. Typically, parcel trajectories are calculated from the flow field and subjective thresholds are used to distinguish coherent streams of interest. This contribution develops a more objective approach to distinguishing coherent airstreams within extratropical cyclones. Agglomerative clustering is applied to trajectories, along with a method to identify the optimal number of cluster classes. The methodology is applied to trajectories associated with the low-level jets of a well-studied extratropical cyclone. For computational efficiency, a constraint that trajectories must pass through these jet regions is applied prior to clustering; the partitioning into different airstreams is then performed by the agglomerative clustering. It is demonstrated that the methodology can identify the salient flow structures of cyclones: the warm and cold conveyor belts. A test focusing on the airstreams terminating at the tip of the bent-back front further demonstrates the success of the method, in that it can distinguish fine-scale flow structure such as descending sting jet airstreams.
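A minimal version of trajectory clustering along these lines can be sketched with hierarchical (Ward) agglomerative clustering on flattened trajectories. The synthetic two-bundle data and the choice of Ward linkage are illustrative assumptions, not the study's configuration:

```python
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster

def cluster_trajectories(trajs, n_clusters):
    """Agglomerative (Ward) clustering of parcel trajectories.
    Each trajectory is a (T, 2) array of positions; trajectories are
    flattened so that Euclidean distance compares whole paths."""
    X = np.stack([t.ravel() for t in trajs])
    Z = linkage(X, method="ward")
    return fcluster(Z, t=n_clusters, criterion="maxclust")

# Synthetic example: two well-separated bundles of straight trajectories.
t = np.linspace(0, 1, 20)
bundle_a = [np.stack([t, 0.0 * t + e], axis=1) for e in np.linspace(0, 0.1, 5)]
bundle_b = [np.stack([t, 5.0 + e - t], axis=1) for e in np.linspace(0, 0.1, 5)]
labels = cluster_trajectories(bundle_a + bundle_b, n_clusters=2)
```

Choosing the number of cluster classes objectively, as in the abstract, would replace the hard-coded `n_clusters=2` with a criterion evaluated over the linkage tree.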
Abstract:
This paper proposes a limitation to epistemological claims to theory building prevalent in critical realist research. While accepting the basic ontological and epistemological positions of the perspective as developed by Roy Bhaskar, it is argued that application in social science has relied on sociological concepts to explain the underlying generative mechanisms, and that in many cases this has been subject to the effects of an anthropocentric constraint. A novel contribution to critical realist research comes from the work and ideas of Gregory Bateson. This is in service of two central goals of critical realism, namely an abductive route to theory building and a commitment to interdisciplinarity. Five aspects of Bateson’s epistemology are introduced: (1) difference, (2) logical levels of abstraction, (3) recursive causal loops, (4) the logic of metaphor, and (5) Bateson’s theory of mind. The comparison between Bateson and Bhaskar’s ideas is seen as a form of double description, illustrative of the point being raised. The paper concludes with an appeal to critical realists to start exploring the writing and outlook of Bateson himself.
Abstract:
The rise in international markets of new, productive Japanese car manufacturers provoked intense world competition, which created serious doubts about the economic sustainability of an industry mostly dominated until the 1970s by European and North-American multinational companies. Ultimately, this crisis provoked a deep transformation of the industry, with consequences that had a permanent impact on European companies in the sector. American and later European manufacturers were successful in lobbying governments to provide protection. Using a rich source of data from the UK, I show that the ‘new trade policy’, voluntary export restraint (VER), placed on Japanese exports of new cars from 1977 to December 1999, was binding. This case study illustrates the strategies used by Japanese manufacturers to gain access to the European market through the UK market via strategic alliances and later through transplant production, against which continental European nation states were unable to fully insulate themselves. It is also shown that the policy had a profound effect on the nature of Japanese products, as Japanese firms responded to the quantity restraints by radically altering the product characteristics of their automobiles and shifting towards larger autos and new goods, to maximise their profits subject to the binding constraint.
Abstract:
Recently, in light of minimalist assumptions, some partial UG-accessibility accounts of adult second language acquisition have distinguished which new features can be acquired after the critical period on the basis of their LF-interpretability (i.e. interpretable vs. uninterpretable features) (HAWKINS, 2005; HAWKINS; HATTORI, 2006; TSIMPLI; MASTROPAVLOU, 2007; TSIMPLI; DIMITRAKOPOULOU, 2007). The Interpretability Hypothesis (TSIMPLI; MASTROPAVLOU, 2007; TSIMPLI; DIMITRAKOPOULOU, 2007) claims that only uninterpretable features suffer a post-critical-period failure and, therefore, cannot be acquired. Conversely, Full Access approaches claim that L2 learners have full access to UG's entire inventory of features, and that L1/L2 differences obtain outside the narrow syntax. The phenomenon studied herein, adult acquisition of the Overt Pronoun Constraint (OPC) (MONTALBETTI, 1984) and inflected infinitives in nonnative Portuguese, challenges the Interpretability Hypothesis insofar as the hypothesis makes the wrong predictions for what is observed. The present data demonstrate that advanced learners of L2 Portuguese acquire the OPC and the syntax and semantics of inflected infinitives with native-like accuracy. Since inflected infinitives require the acquisition of new uninterpretable φ-features, the present data provide evidence against Tsimpli and colleagues' Interpretability Hypothesis.
Abstract:
In this article, along with others, we take the position that the Null-Subject Parameter (NSP) (Chomsky 1981; Rizzi 1982) cluster of properties is narrower in scope than some originally contended. We test for the resetting of the NSP by English L2 learners of Spanish at the intermediate level, including poverty-of-the-stimulus knowledge of the Overt Pronoun Constraint (Montalbetti 1984). Our participants are tested before and after five months' residency in Spain in an effort to see if increased amounts of native exposure are particularly beneficial for parameter resetting. Although we demonstrate NSP resetting for some of the L2 learners, our data essentially demonstrate that even with added time and exposure to native input, there is no immediate gainful effect on NSP resetting.
Abstract:
Although the estimation of turbulent transport parameters using inverse methods is not new, there is little evaluation of the method in the literature. Here, it is shown that extended observation of the broad-scale hydrography by Argo provides a path to improved estimates of regional turbulent transport rates. Results from a 20-year ocean state estimate produced with the ECCO v4 non-linear inverse modeling framework provide supporting evidence. Turbulent transport parameter maps are estimated under the constraint of fitting the extensive collection of Argo profiles collected through 2011. The adjusted parameters dramatically reduce misfits to in situ profiles as compared with earlier ECCO solutions. They also yield a clear reduction in the model drift away from observations over multi-century-long simulations, both for assimilated variables (temperature and salinity) and independent variables (biogeochemical tracers). Despite the minimal constraints imposed specifically on the estimated parameters, their geography is physically plausible and exhibits close connections with the upper-ocean stratification as observed by Argo. The estimated parameter adjustments furthermore have first-order impacts on upper-ocean stratification and mixed layer depths over 20 years. These results identify the constraint of fitting Argo profiles as an effective observational basis for regional turbulent transport rates. Uncertainties and further improvements of the method are discussed.
Abstract:
Contemporary research in generative second language (L2) acquisition has attempted to address observable target-deviant aspects of L2 grammars within a UG-continuity framework (e.g. Lardiere 2000; Schwartz 2003; Sprouse 2004; Prévost & White 1999, 2000). With the aforementioned in mind, the independence of pragmatic and syntactic development, independently observed elsewhere (e.g. Grodzinsky & Reinhart 1993; Lust et al. 1986; Pacheco & Flynn 2005; Serratrice, Sorace & Paoli 2004), becomes particularly interesting. In what follows, I examine the resetting of the Null-Subject Parameter (NSP) for English learners of L2 Spanish. I argue that insensitivity to the associated discourse-pragmatic constraints on the discursive distribution of overt/null subjects accounts for what appear to be errors resulting from syntactic deficits. It is demonstrated that, despite target-deviant performance, the majority must have native-like syntactic competence, given their knowledge of the Overt Pronoun Constraint (Montalbetti 1984), a principle associated with the Spanish-type setting of the NSP.
Abstract:
Increasing optical depth poleward of 45° is a robust response to warming in global climate models. Much of this cloud optical depth increase has been hypothesized to be due to transitions from ice-dominated to liquid-dominated mixed-phase cloud. In this study, the importance of liquid-ice partitioning for the optical depth feedback is quantified for 19 Coupled Model Intercomparison Project Phase 5 models. All models show a monotonic partitioning of ice and liquid as a function of temperature, but the temperature at which ice and liquid are equally mixed (the glaciation temperature) varies by as much as 40 K across models. Models that have a higher glaciation temperature are found to have a smaller climatological liquid water path (LWP) and condensed water path and experience a larger increase in LWP as the climate warms. The ice-liquid partitioning curve of each model may be used to calculate the response of LWP to warming. It is found that the repartitioning between ice and liquid in a warming climate contributes at least 20% to 80% of the increase in LWP as the climate warms, depending on model. Intermodel differences in the climatological partitioning between ice and liquid are estimated to contribute at least 20% to the intermodel spread in the high-latitude LWP response in the mixed-phase region poleward of 45°S. It is hypothesized that a more thorough evaluation and constraint of global climate model mixed-phase cloud parameterizations and validation of the total condensate and ice-liquid apportionment against observations will yield a substantial reduction in model uncertainty in the high-latitude cloud response to warming.
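The role of the glaciation temperature can be sketched with a hypothetical partitioning curve. The sigmoid form, the 10 K transition width, and the uniform condensate profile below are assumptions for illustration only:

```python
import numpy as np

def liquid_fraction(T, T_glac, width=10.0):
    """Monotonic ice-liquid partitioning: the fraction of condensate that
    is liquid as a function of temperature T (K). T_glac is the glaciation
    temperature at which ice and liquid are equally mixed (fraction 0.5);
    `width` (K) sets how sharp the transition is. Hypothetical sigmoid form."""
    return 1.0 / (1.0 + np.exp(-(T - T_glac) / width))

def lwp_from_partitioning(T_profile, condensate, T_glac):
    """Liquid water path implied by partitioning a total condensate profile."""
    return np.sum(condensate * liquid_fraction(T_profile, T_glac))
```

With this toy curve, warming a fixed condensate profile raises the liquid water path, and raising the glaciation temperature lowers the climatological LWP, matching the qualitative relationships described above.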
Abstract:
The purpose of this paper is to investigate several analytical methods of solving the first passage (FP) problem for the Rouse model, the simplest model of a polymer chain. We show that this problem has to be treated as a multi-dimensional Kramers' problem, which presents rich and unexpected behavior. We first perform direct and forward-flux sampling (FFS) simulations, and measure the mean first-passage time $\tau(z)$ for the free end to reach a certain distance $z$ away from the origin. The results show that the mean FP time decreases as the Rouse chain is represented by more beads. Two scaling regimes of $\tau(z)$ are observed, with the transition between them varying as a function of chain length. We use these simulation results to test two theoretical approaches. One is a well-known asymptotic theory valid in the limit of zero temperature. We show that this limit corresponds to a fully extended chain in which each chain segment is stretched, which is not particularly realistic. A new theory based on the well-known Freidlin-Wentzell theory is proposed, in which the dynamics is projected onto the minimal action path. The new theory predicts both scaling regimes correctly, but fails to obtain the correct numerical prefactor in the first regime. Combining our theory with the FFS simulations leads us to a simple analytical expression valid for all extensions and chain lengths. One application of the polymer FP problem occurs in the context of branched polymer rheology. In this paper, we consider the arm-retraction mechanism in the tube model, which maps exactly onto the model we have solved. The results are compared to the Milner-McLeish theory without constraint release, which is found to overestimate the FP time by a factor of 10 or more.
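A direct-simulation estimate of the first-passage times discussed above can be sketched with overdamped Brownian dynamics of a tethered 1D Rouse chain. The bead number, time step, and thresholds are illustrative, and a single trajectory is used rather than the paper's averaged direct and FFS runs:

```python
import numpy as np

def rouse_fpt(n_beads=4, thresholds=(0.5, 1.0, 1.5), dt=0.01,
              max_steps=200_000, seed=0):
    """First-passage times of the free end of a 1D Rouse chain, with bead 0
    tethered at the origin and kT = friction = spring constant = 1.
    Returns the first time |x_end| exceeds each threshold along one
    trajectory, so the times are non-decreasing in the threshold by
    construction (illustrative parameters, not the paper's)."""
    rng = np.random.default_rng(seed)
    x = np.zeros(n_beads)
    times = np.full(len(thresholds), np.nan)
    for step in range(1, max_steps + 1):
        force = np.zeros(n_beads)
        force[1:-1] = x[2:] - 2 * x[1:-1] + x[:-2]  # harmonic spring forces
        force[-1] = x[-2] - x[-1]                   # free end: one neighbor
        x[1:] += force[1:] * dt + np.sqrt(2 * dt) * rng.standard_normal(n_beads - 1)
        for i, z in enumerate(thresholds):
            if np.isnan(times[i]) and abs(x[-1]) >= z:
                times[i] = step * dt
        if not np.isnan(times[-1]):
            break
    return times
```

A production version would average these passage times over many independent trajectories (or use forward-flux sampling for large $z$) to estimate the mean FP time $\tau(z)$.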
Abstract:
A smoother introduced earlier by van Leeuwen and Evensen is applied to a problem in which real observations are used in an area with strongly nonlinear dynamics. The derivation is new, but it resembles an earlier derivation by van Leeuwen and Evensen. Again a Bayesian view is taken, in which the prior probability density of the model and the probability density of the observations are combined to form a posterior density. The mean and the covariance of this density give the variance-minimizing model evolution and its errors. The assumption is made that the prior probability density is a Gaussian, leading to a linear update equation. Critical evaluation shows when the assumption is justified. This also sheds light on why Kalman filters, in which the same approximation is made, work for nonlinear models. By reference to the derivation, the impact of model and observational biases on the equations is discussed, and it is shown that Bayes's formulation can still be used. A practical advantage of the ensemble smoother is that no adjoint equations have to be integrated and that error estimates are easily obtained. The present application shows that for process studies a smoother will give superior results compared to a filter, not only owing to the smooth transitions at observation points, but also because the origin of features can be followed back in time. Its preference over a strong-constraint method is also highlighted. Furthermore, it is argued that the proposed smoother is more efficient than gradient descent methods or the representer method when error estimates are taken into account.
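The linear (Gaussian) update equation mentioned above can be sketched as an ensemble Kalman-type update of the mean. The observation operator, error covariance, and ensemble below are illustrative placeholders, not the study's setup:

```python
import numpy as np

def ensemble_smoother_update(X, y, H, R):
    """Linear Gaussian Bayesian update of an ensemble X (n_state, n_ens)
    given observations y with linear operator H and error covariance R.
    Returns the variance-minimizing posterior mean; the sample covariance
    of the ensemble plays the role of the prior covariance."""
    xm = X.mean(axis=1, keepdims=True)
    A = X - xm                                     # ensemble anomalies
    P = A @ A.T / (X.shape[1] - 1)                 # sample prior covariance
    K = P @ H.T @ np.linalg.inv(H @ P @ H.T + R)   # Kalman gain
    return xm.ravel() + K @ (y - H @ xm.ravel())
```

When the observation error R is small relative to the prior spread, the update pulls the mean to the observations; when R is large, the prior mean is essentially retained, which is the behavior the linear update equation encodes.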