43 results for germs of holomorphic generalized functions
Abstract:
We apply a new X-ray scattering approach to the study of melt-spun filaments of tri-block and random terpolymers prepared from lactide, caprolactone and glycolide. Both terpolymers contain random sequences; in both cases the overall fraction of lactide units is approximately 0.7, and C-13 and H-1 NMR shows the lactide sequence length to be approximately 9-10. A novel representation of the X-ray fibre pattern as a series of spherical harmonic functions considerably facilitates the comparison of the scattering from the minority crystalline phase with hot-drawn fibres prepared from the poly(L-lactide) homopolymer. Although the fibres exhibit rather disordered structures, we show that the crystal structure is equivalent to that displayed by poly(L-lactide) for both the block and random terpolymers. There are variations in the development of a two-phase structure which reflect the differences in the chain architectures. There is evidence that the random terpolymer incorporates non-lactide units into the crystal interfaces to achieve a well-defined two-phase structure.
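A toy sketch of the harmonic representation step, assuming nothing from the paper beyond the idea of expanding an intensity pattern on a sphere in a truncated series of spherical harmonics; the grid, test pattern and truncation degree are invented, and scipy.special.sph_harm stands in for whatever quadrature the authors used.

```python
import numpy as np
from scipy.special import sph_harm

L = 6
theta = np.linspace(0, np.pi, 60)           # polar angle
phi = np.linspace(0, 2 * np.pi, 120)        # azimuth
PHI, THETA = np.meshgrid(phi, theta)
I = np.cos(THETA) ** 2                      # stand-in fibre-symmetric pattern

# Riemann-sum quadrature weights on the sphere
dA = np.sin(THETA) * (theta[1] - theta[0]) * (phi[1] - phi[0])
coeffs = {}
for l in range(L + 1):
    for m in range(-l, l + 1):
        Y = sph_harm(m, l, PHI, THETA)      # scipy convention: (m, l, azimuth, polar)
        coeffs[(l, m)] = np.sum(I * np.conj(Y) * dA)

# fibre symmetry leaves only m = 0, even-l terms significantly non-zero here
print({k: round(abs(v), 3) for k, v in coeffs.items() if abs(v) > 1e-2})
```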
Abstract:
How do organizations previously dominated by the state develop dynamic capabilities that would support their growth in a competitive market economy? We develop a theoretical framework of organizational transformation that explains the processes by which organizations learn and develop dynamic capabilities in transition economies. Specifically, the framework theorizes about the importance of, and inter-relationships between, leadership, organizational learning, dynamic capabilities, and performance over three stages of transformation. Propositions derived from this framework explain the pre-conditions enabling organizational learning, the linkages between types of learning and functions of dynamic capabilities, and the feedback from dynamic capabilities to organizational learning that allows firms in transition economies to regain their footing and build long-term competitive advantage. We focus on transition contexts, where these processes have been magnified and thus offer new insights into strategizing in radically altered environments.
Abstract:
We use proper orthogonal decomposition (POD) to study a transient teleconnection event at the onset of the 2001 planet-encircling dust storm on Mars, in terms of empirical orthogonal functions (EOFs). There are several differences between this and previous studies of atmospheric events using EOFs. First, instead of using a single variable such as surface pressure or geopotential height on a given pressure surface, we use a dataset describing the evolution in time of global and fully three-dimensional atmospheric fields such as horizontal velocity and temperature. These fields are produced by assimilating Thermal Emission Spectrometer observations from NASA's Mars Global Surveyor spacecraft into a Mars general circulation model. We use total atmospheric energy (TE) as a physically meaningful quantity which weights the state variables. Second, instead of adopting the EOFs to define teleconnection patterns as planetary-scale correlations that explain a large portion of long time-scale variability, we use EOFs to understand transient processes due to localised heating perturbations that have implications for the atmospheric circulation over distant regions. The localised perturbation is given by anomalous heating due to the enhanced presence of dust around the northern edge of the Hellas Planitia basin on Mars. We show that the localised disturbance is seemingly restricted to a small number (a few tens) of EOFs. These can be classified as low-order, transitional, or high-order EOFs according to the TE amount they explain throughout the event. Despite the global character of the EOFs, they show the capability of accounting for the localised effects of the perturbation via the presence of specific centres of action. We finally discuss possible applications for the study of terrestrial phenomena with similar characteristics.
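A minimal sketch of the POD/EOF machinery described above, on a generic (time x state) snapshot matrix: anomalies are weighted so the inner product measures energy, and the SVD yields EOFs, principal components and explained-energy fractions. The data, sizes and weights are placeholders, not the assimilated MGS/TES fields.

```python
import numpy as np

rng = np.random.default_rng(0)
nt, nx = 120, 500                     # snapshots x state dimension (toy sizes)
X = rng.standard_normal((nt, nx))     # stand-in for stacked u, v, T fields

w = np.ones(nx)                       # per-element energy weights (placeholder)
Xa = (X - X.mean(axis=0)) * np.sqrt(w)    # anomalies in the energy inner product

U, S, Vt = np.linalg.svd(Xa, full_matrices=False)
eofs = Vt                             # rows: spatial EOF patterns
pcs = U * S                           # columns: principal-component time series
explained = S**2 / np.sum(S**2)       # fraction of total energy per EOF
print("energy explained by leading 5 EOFs:", np.round(explained[:5], 3))
```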
Abstract:
Recent studies into price transmission have recognized the important role played by transport and transaction costs. Threshold models are one approach to accommodating such costs. We develop a generalized Threshold Error Correction Model to test for the presence and form of threshold behavior in price transmission that is symmetric around equilibrium. We use monthly wheat, maize, and soya prices from the United States, Argentina, and Brazil to demonstrate this model. Classical estimation of these generalized models can present challenges, but Bayesian techniques avoid many of these problems. Evidence for thresholds is found in three of the five commodity price pairs investigated.
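A hedged sketch of one period of a band-threshold error-correction model of the kind described: adjustment toward the long-run equilibrium operates only when the lagged price spread exceeds a threshold c, symmetrically around equilibrium, mimicking transaction costs. All parameter names and values (beta, c, alpha_out, phi, sigma) are hypothetical, not the paper's estimates, and the Bayesian estimation itself is not shown.

```python
import numpy as np

def tecm_step(p1, p2, dp1_lag, dp2_lag, beta=1.0, c=0.05,
              alpha_out=-0.4, phi=0.2, sigma=0.01, rng=None):
    """One period of a band-TECM: error correction only outside |ect| > c."""
    rng = rng or np.random.default_rng()
    ect = p1 - beta * p2                        # lagged deviation from equilibrium
    alpha = alpha_out if abs(ect) > c else 0.0  # transaction-cost band: no pull inside
    dp1 = alpha * ect + phi * dp1_lag + sigma * rng.standard_normal()
    dp2 = phi * dp2_lag + sigma * rng.standard_normal()
    return p1 + dp1, p2 + dp2, dp1, dp2

# simulate a short path starting away from equilibrium
p1, p2, d1, d2 = 0.2, 0.0, 0.0, 0.0
for _ in range(100):
    p1, p2, d1, d2 = tecm_step(p1, p2, d1, d2)
print(f"final spread: {p1 - p2:.3f}")           # roughly settles inside the band
```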
Abstract:
This paper presents a new method for the inclusion of nonlinear demand and supply relationships within a linear programming model. An existing method for this purpose is described first and its shortcomings are pointed out, before showing how the new approach overcomes those difficulties, providing a more accurate and 'smooth' (rather than kinked) approximation of the nonlinear functions and dealing with equilibrium under perfect competition rather than just the monopolistic situation. The workings of the proposed method are illustrated by extending a previously available sectoral model of UK agriculture.
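For orientation, a minimal baseline of the classical kinked approach that the paper improves on, not the paper's new method: the area under a (hypothetical) linear inverse-demand curve is approximated by segments, so competitive welfare maximisation becomes an ordinary LP. All numbers are invented.

```python
import numpy as np
from scipy.optimize import linprog

a, b, mc = 10.0, 0.5, 4.0               # inverse demand p = a - b*q; marginal cost
seg = np.linspace(0.0, 14.0, 29)        # quantity grid: 28 segments of width 0.5
widths = np.diff(seg)
mid_p = a - b * (seg[:-1] + seg[1:]) / 2    # demand price at segment midpoints

# maximise sum_i (mid_p_i - mc) * x_i  subject to  0 <= x_i <= width_i
res = linprog(c=-(mid_p - mc), bounds=list(zip(np.zeros_like(widths), widths)))
print(f"LP equilibrium q = {res.x.sum():.2f}; analytic q = {(a - mc) / b:.2f}")
```

The LP fills every segment whose demand price exceeds marginal cost, so its optimum sits at the kinked approximation of the competitive equilibrium p = mc.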
Abstract:
Background: The present paper investigates the question of a suitable basic model for the number of scrapie cases in a holding, and applications of this knowledge to the estimation of scrapie-affected holding population sizes and the adequacy of control measures within holdings. Is the number of scrapie cases proportional to the size of the holding, in which case it should be incorporated into the parameter of the error distribution for the scrapie counts? Or is there a different - potentially more complex - relationship between case count and holding size, in which case the information about the size of the holding would be better incorporated as a covariate in the modelling? Methods: We show that this question can be appropriately addressed via a simple zero-truncated Poisson model in which the hypothesis of proportionality enters as a special offset-model. Model comparisons can be achieved by means of likelihood ratio testing. The procedure is illustrated by means of surveillance data on classical scrapie in Great Britain. Furthermore, the model with the best fit is used to estimate the size of the scrapie-affected holding population in Great Britain by means of two capture-recapture estimators: the Poisson estimator and the generalized Zelterman estimator. Results: No evidence could be found for the hypothesis of proportionality. In fact, there is some evidence that this relationship follows a curved line which increases for small holdings up to a maximum, after which it declines again. Furthermore, it is pointed out how crucial the correct model choice is for capture-recapture estimation on the basis of zero-truncated Poisson models as well as on the basis of the generalized Zelterman estimator. Estimators based on the proportionality model return very different and unreasonable estimates for the population sizes. Conclusion: Our results stress the importance of an adequate modelling approach to the association between holding size and the number of cases of classical scrapie within a holding. Reporting artefacts and speculative biological effects are hypothesized as the underlying causes of the observed curved relationship. The lack of adjustment for these artefacts might well render ineffective the current strategies for the control of the disease.
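A self-contained sketch of the central model comparison, on simulated data rather than the GB surveillance data: a zero-truncated Poisson likelihood with log holding size entering either as a fixed offset (the proportionality hypothesis) or as a free covariate, compared by a likelihood-ratio test, followed by a Horvitz-Thompson-style population estimate in the spirit of the Poisson capture-recapture estimator mentioned. Simulation parameters are invented.

```python
import numpy as np
from scipy.optimize import minimize
from scipy.stats import chi2

rng = np.random.default_rng(1)
n = np.exp(rng.normal(4, 1, 400))                  # holding sizes
y = rng.poisson(np.exp(-3 + 0.6 * np.log(n)))      # truth: NOT proportional
y, n = y[y > 0], n[y > 0]                          # zero-truncation: only cases seen

def negll(beta, offset_only):
    # zero-truncated Poisson log-likelihood (log y! constant dropped)
    eta = beta[0] + (np.log(n) if offset_only else beta[1] * np.log(n))
    lam = np.exp(eta)
    return -np.sum(y * eta - lam - np.log1p(-np.exp(-lam)))

fit0 = minimize(negll, [0.0], args=(True,), method="Nelder-Mead")   # offset model
fit1 = minimize(negll, [0.0, 1.0], args=(False,), method="Nelder-Mead")
lr = 2 * (fit0.fun - fit1.fun)                     # offset model is nested (beta1 = 1)
print("LR test p-value for proportionality:", chi2.sf(lr, df=1))

# Horvitz-Thompson-style estimate of the affected-holding population:
# weight each observed holding by its detection probability 1 - exp(-lam)
lam1 = np.exp(fit1.x[0] + fit1.x[1] * np.log(n))
print("estimated affected-holding population:", round(np.sum(1 / (1 - np.exp(-lam1)))))
```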
Abstract:
Kinetic studies on the AR (aldose reductase) protein have shown that it does not behave as a classical enzyme in relation to ring aldose sugars. As with non-enzymatic glycation reactions, there is probably a free radical element involved, derived from monosaccharide autoxidation. In the case of AR, there is free radical oxidation of NADPH by autoxidizing monosaccharides, which is enhanced in the presence of the NADPH-binding protein. Thus any assay for AR based on the oxidation of NADPH in the presence of autoxidizing monosaccharides is invalid, and tissue AR measurements based on this method are also invalid and should be reassessed. AR exhibits broad specificity for both hydrophilic and hydrophobic aldehydes, which suggests that the protein may be involved in detoxification. The last thing we would want to do is to inhibit it. ARIs (AR inhibitors) have a number of actions in the cell which are not specific and which do not involve binding to AR. These include peroxy-radical scavenging and effects of metal ion chelation. The AR/ARI story emphasizes the importance of correct experimental design in all biocatalytic experiments. Developing the use of Bayesian utility functions, we have used a systematic method to identify the optimum experimental designs for a number of kinetic model data sets. This has led to the identification of trends between kinetic model types, sets of design rules, and the key conclusion that such designs should be based on some prior knowledge of K-m and/or the kinetic model. We suggest an optimal and iterative method for selecting features of the design such as the substrate range, the number of measurements and the choice of intermediate points. The final design collects data suitable for accurate modelling and analysis, minimizes the error in the estimated parameters, and is suitable for simple or complex steady-state models.
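A small sketch of the design point made above, that a good design presupposes prior knowledge of K-m: a locally D-optimal two-point design for the Michaelis-Menten model, found by brute force over a substrate grid given an assumed Km. Vmax, Km and the grid are toy values, and this is a local (not fully Bayesian) criterion.

```python
import numpy as np
from itertools import combinations

Vmax, Km, s_max = 1.0, 2.0, 10.0          # assumed prior values (illustrative)
grid = np.linspace(0.1, s_max, 200)       # candidate substrate concentrations

def info(s):
    # Fisher information for v = Vmax*s/(Km+s): Jacobian wrt (Vmax, Km)
    J = np.column_stack([s / (Km + s), -Vmax * s / (Km + s) ** 2])
    return J.T @ J

# D-optimality: maximise det of the information over all two-point designs
best = max(combinations(grid, 2),
           key=lambda pts: np.linalg.det(info(np.array(pts))))
print("D-optimal substrate pair given the assumed Km:", np.round(best, 2))
```

Re-running with a different assumed Km moves the optimal points, which is exactly why the abstract argues designs need prior knowledge of K-m and/or the kinetic model.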
Abstract:
A full dimensional, ab initio-based semiglobal potential energy surface for C2H3+ is reported. The ab initio electronic energies for this molecule are calculated using the spin-restricted, coupled cluster method restricted to single and double excitations with triples corrections [RCCSD(T)]. The RCCSD(T) method is used with the correlation-consistent polarized valence triple-zeta basis augmented with diffuse functions (aug-cc-pVTZ). The ab initio potential energy surface is represented by a many-body (cluster) expansion, each term of which uses functions that are fully invariant under permutations of like nuclei. The fitted potential energy surface is validated by comparing normal mode frequencies at the global minimum and secondary minimum with previous and new direct ab initio frequencies. The potential surface is used in vibrational analysis using the "single-reference" and "reaction-path" versions of the code MULTIMODE.
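A generic sketch of the validation step, computing harmonic (normal-mode) frequencies from a mass-weighted Hessian; the Hessian below is a 1-D diatomic toy standing in for second derivatives of the fitted C2H3+ surface, not anything from the paper.

```python
import numpy as np

HARTREE_TO_CM1 = 219474.63

def normal_mode_frequencies(H, m):
    """Harmonic frequencies (cm^-1) from a Cartesian Hessian H (Hartree/bohr^2)
    and per-coordinate masses m (electron masses); zero modes are dropped."""
    Hw = H / np.sqrt(np.outer(m, m))       # mass-weighted Hessian
    w2 = np.linalg.eigvalsh(Hw)            # eigenvalues = omega^2 in atomic units
    return np.sqrt(w2[w2 > 1e-12]) * HARTREE_TO_CM1

# toy check: 1-D diatomic with an H2-like force constant and masses;
# the stretch comes out near the familiar ~4400 cm^-1
k, m_H = 0.37, 1836.15
H = np.array([[k, -k], [-k, k]])
print(normal_mode_frequencies(H, np.array([m_H, m_H])))
```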
Abstract:
The visual perception of size in different regions of external space was studied in Parkinson's disease (PD). A group of patients with worse left-sided symptoms (LPD) was compared with a group with worse right-sided symptoms (RPD) and with a group of age-matched controls on judgements of the relative height or width of two rectangles presented in different regions of external space. The relevant dimension of one rectangle (the 'standard') was held constant, while that of the other (the 'variable') was varied by the method of constant stimuli. The point of subjective equality (PSE) of rectangle width or height was obtained by probit analysis as the mean of the resulting psychometric function. When the standard was in left space, the PSE of the LPD group occurred when the variable was smaller, and when the standard was in right space, when the variable was larger. Similarly, when the standard rectangle was presented in upper space and the variable in lower space, the PSE occurred when the variable was smaller, an effect which was similar in both left and right spaces. In all these experiments, the PSEs for both the controls and the RPD group did not differ significantly and were close to a physical match, and the slopes of the psychometric functions were steeper in the controls than in the patients, though not significantly so. The data suggest that objects appear smaller in the left and upper visual spaces in LPD, probably because of right hemisphere impairment.
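A minimal sketch of the probit analysis used to extract the PSE, on simulated responses: a cumulative Gaussian is fitted by maximum likelihood to the proportion of "variable judged larger" responses across variable-rectangle sizes; the fitted mean is the PSE and the spread is the inverse slope. Sizes, trial counts and the true PSE are invented.

```python
import numpy as np
from scipy.stats import norm
from scipy.optimize import minimize

rng = np.random.default_rng(2)
sizes = np.linspace(8, 12, 9)                    # variable-rectangle sizes (toy)
n_trials = 40
p_true = norm.cdf(sizes, loc=9.6, scale=0.8)     # true PSE shifted from standard=10
k = rng.binomial(n_trials, p_true)               # "larger" responses per level

def negll(theta):
    mu, sigma = theta
    p = norm.cdf(sizes, mu, abs(sigma)).clip(1e-9, 1 - 1e-9)
    return -np.sum(k * np.log(p) + (n_trials - k) * np.log(1 - p))

pse, slope = minimize(negll, [10.0, 1.0]).x
print(f"PSE = {pse:.2f} (physical match = 10.0), sigma = {abs(slope):.2f}")
```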
Abstract:
This paper examines optimal solutions of control systems with drift defined on the orthonormal frame bundle of particular Riemannian manifolds of constant curvature. The manifolds considered here are the space forms: Euclidean space E-3, the sphere S-3 and the hyperboloid H-3, with the corresponding frame bundles equal to the Euclidean group of motions SE(3), the rotation group SO(4) and the Lorentz group SO(1,3). The optimal controls of these systems are solved explicitly in terms of elliptic functions. In this paper, a geometric interpretation of the extremal solutions is given, with particular emphasis on a singularity in the explicit solutions. Using a reduced form of the Casimir functions, the geometry of these solutions is illustrated.
Abstract:
Utilising the expressive power of S-Expressions in Learning Classifier Systems often prohibitively increases the search space, due to the increased flexibility of the encoding. This work shows that selection of appropriate S-Expression functions through domain knowledge improves scaling in problems, as expected. It is also known that simple alphabets perform well on relatively small problems in a domain, e.g. the ternary alphabet in the 6-, 11- and 20-bit MUX domains. Once fit ternary rules have been formed, it was investigated whether higher-order learning was possible and whether this staged learning facilitated the selection of appropriate functions in complex alphabets, e.g. the selection of S-Expression functions. This novel methodology is shown to provide compact results (135-MUX) and exhibits potential for scaling well (1034-MUX), but is only a small step towards introducing abstraction to LCS.
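For concreteness, a sketch of the 6-bit multiplexer target and the classic ternary-rule matching that the staged-learning idea starts from; the S-Expression function-selection layer itself is not reproduced here.

```python
def mux(bits, k=2):
    """k address bits select one of 2**k data bits (6-MUX for k=2)."""
    address = int("".join(map(str, bits[:k])), 2)
    return bits[k + address]

def ternary_match(rule, bits):
    """Classic LCS ternary condition over {0, 1, #}: '#' is don't-care."""
    return all(c == "#" or int(c) == b for c, b in zip(rule, bits))

assert mux([1, 0, 0, 0, 1, 0]) == 1        # address bits '10' -> data bit 2
assert ternary_match("10##1#", [1, 0, 0, 0, 1, 0])
```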
Abstract:
This paper considers the motion planning problem for oriented vehicles travelling at unit speed in a 3-D space. A Lie group formulation arises naturally and the vehicles are modeled as kinematic control systems with drift defined on the orthonormal frame bundles of particular Riemannian manifolds, specifically the 3-D space forms: Euclidean space E-3, the sphere S-3, and the hyperboloid H-3. The corresponding frame bundles are equal to the Euclidean group of motions SE(3), the rotation group SO(4), and the Lorentz group SO(1,3). The maximum principle of optimal control shifts the emphasis for these systems to the associated Hamiltonian formalism. For an integrable case, the extremal curves are explicitly expressed in terms of elliptic functions. In this paper, a study of the singularities of the extremal curves is given; these correspond to critical points of the elliptic functions. The extremal curves are characterized as the intersections of invariant surfaces and are illustrated graphically at the singular points. It is then shown that the projections of the extremals onto the base space, called elastica, are, at these singular points, curves of constant curvature and torsion, which in turn implies that the oriented vehicles trace helices.
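The closed-form extremals themselves are not reproduced here, but the degenerate behaviour at issue can be illustrated with Jacobi elliptic functions: as the modulus m approaches 1, the oscillatory sn collapses onto the hyperbolic tanh, the kind of qualitative change that occurs at singular parameter values. This is an illustration of the special functions only, under invented arguments, not the paper's extremal formulas.

```python
import numpy as np
from scipy.special import ellipj

u = np.linspace(0.0, 10.0, 5)
for m in (0.5, 0.99, 1.0):
    sn, cn, dn, _ = ellipj(u, m)            # Jacobi elliptic sn, cn, dn
    print(f"m={m}: sn={np.round(sn, 3)}")
print("tanh:", np.round(np.tanh(u), 3))     # matches the degenerate m = 1 row
```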
Abstract:
Neurofuzzy modelling systems combine fuzzy logic with quantitative artificial neural networks via fuzzification, using fuzzy membership functions usually based on B-splines, together with algebraic operators for inference. The paper introduces a neurofuzzy model construction algorithm using Bezier-Bernstein polynomial functions as basis functions. The new network maintains most of the properties of the B-spline expansion based neurofuzzy system, such as the non-negativity of the basis functions and unity of support, but with the additional advantages of structural parsimony and Delaunay input space partitioning, avoiding the inherent computational problems of lattice networks. This new modelling network is based on the idea that an input vector can be mapped into barycentric co-ordinates with respect to a set of predetermined knots forming the vertices of a polygon (a set of tiled Delaunay triangles) over the input space. The network is expressed as the Bezier-Bernstein polynomial function of the barycentric co-ordinates of the input vector. An inverse de Casteljau procedure using backpropagation is developed to obtain the input vector's barycentric co-ordinates, which form the basis functions. Extension of the Bezier-Bernstein neurofuzzy algorithm to n-dimensional inputs is discussed, followed by numerical examples that demonstrate the effectiveness of this new data-based modelling approach.
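A sketch of the core mapping, under invented vertices and degree: barycentric co-ordinates of an input point with respect to a triangle, and the bivariate Bernstein polynomials of those co-ordinates, which are non-negative and sum to one, matching the basis-function properties the abstract lists. The inverse de Casteljau / backpropagation step is not reproduced.

```python
import numpy as np
from math import factorial

def barycentric(p, tri):
    """Barycentric co-ordinates of a 2-D point p in triangle tri (3x2 array)."""
    A = np.vstack([tri.T, np.ones(3)])       # solves sum(l_i * v_i) = p, sum(l_i) = 1
    return np.linalg.solve(A, np.append(p, 1.0))

def bernstein_basis(lam, d=2):
    """All bivariate Bernstein polynomials B_{ijk}^d(lam) with i + j + k = d."""
    out = {}
    for i in range(d + 1):
        for j in range(d + 1 - i):
            k = d - i - j
            coef = factorial(d) // (factorial(i) * factorial(j) * factorial(k))
            out[(i, j, k)] = coef * lam[0]**i * lam[1]**j * lam[2]**k
    return out

tri = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0]])   # toy Delaunay triangle
lam = barycentric(np.array([0.25, 0.25]), tri)
basis = bernstein_basis(lam)
print(sum(basis.values()))                   # partition of unity: 1.0
```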
Abstract:
A distributed Lagrangian moving-mesh finite element method is applied to problems involving changes of phase. The algorithm uses a distributed conservation principle to determine nodal mesh velocities, which are then used to move the nodes. The nodal values are obtained from an ALE (Arbitrary Lagrangian-Eulerian) equation, which represents a generalization of the original algorithm presented in Applied Numerical Mathematics, 54:450-469 (2005). Having described the details of the generalized algorithm, we validate it on two test cases from the original paper and then apply it to one-phase and, for the first time, two-phase Stefan problems in one and two space dimensions, paying particular attention to the implementation of the interface boundary conditions. Results are presented to demonstrate the accuracy and effectiveness of the method, including comparisons against analytical solutions where available.
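A runnable toy for the same one-phase test problem, but using the standard boundary-immobilisation front-tracking transformation rather than the paper's distributed-conservation moving-mesh algorithm; grid size, time step and initial front position are arbitrary.

```python
import numpy as np

# 1-D one-phase Stefan problem: u_t = u_xx on (0, s(t)), u(0,t) = 1,
# u(s(t),t) = 0, with Stefan condition ds/dt = -u_x(s,t) (unit latent heat).
# The map xi = x/s(t) fixes the domain to [0,1]; the transformed PDE picks up
# an advection term from the mesh motion: v_t = v_xixi/s^2 + xi*(sdot/s)*v_xi.

N = 50
xi = np.linspace(0.0, 1.0, N + 1)
dxi = xi[1] - xi[0]
v = 1.0 - xi                   # initial profile; v[0]=1, v[-1]=0 held fixed
s, t, dt = 0.1, 0.0, 1e-6      # dt respects dt <= dxi^2 * s^2 / 2 at start

for _ in range(100_000):
    sdot = -(v[-1] - v[-2]) / (dxi * s)          # Stefan condition, one-sided u_x
    v_xi = (v[2:] - v[:-2]) / (2 * dxi)
    v_xixi = (v[2:] - 2 * v[1:-1] + v[:-2]) / dxi**2
    v[1:-1] += dt * (v_xixi / s**2 + xi[1:-1] * (sdot / s) * v_xi)
    s += dt * sdot
    t += dt

print(f"front position s({t:.3f}) = {s:.4f}")    # front advances as ~sqrt(t)
```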
Abstract:
Under the Public Bodies Bill 2010, the HFEA, a cornerstone of the regulation of assisted reproduction technologies (ART) for the last twenty years, is due to be abolished. This implies that there is no longer a need for a dedicated regulator for ART and that the existing roles of the Authority, as both operational compliance monitor and instance of ethical evaluation, may be absorbed by existing healthcare regulators. This article presents a timely analysis of these disparate functions of the HFEA, charting reforms adopted in 2008 and assessing the impact of the current proposals. Taking assisted conception treatment as the focus activity, it will be shown that the last few years have seen a concentration on the HFEA as a technical regulator based upon the principles of Better Regulation, with little analysis of how the ethical responsibility of the Authority fits into this framework. The current proposal to abolish the HFEA continues to fail to address this crucial question. Notwithstanding the fact that the scope of the Authority's ethical role may be questioned, its abolition requires that the Government consider what alternatives exist - or need to be put in place - to provide both responsive operational regulation and a forum for ethical reflection and decision-making in an area which continues to pose regulatory challenges.