12 results for TIME-VARIABLE GRAVITY
in Aston University Research Archive
Abstract:
A numerical method for the Dirichlet initial boundary value problem for the heat equation in the exterior and unbounded region of a smooth closed simply connected 3-dimensional domain is proposed and investigated. This method is based on a combination of a Laguerre transformation with respect to the time variable and an integral equation approach in the spatial variables. Using the Laguerre transformation in time reduces the parabolic problem to a sequence of stationary elliptic problems which are solved by a boundary layer approach giving a sequence of boundary integral equations of the first kind to solve. Under the assumption that the boundary surface of the solution domain has a one-to-one mapping onto the unit sphere, these integral equations are transformed and rewritten over this sphere. The numerical discretisation and solution are obtained by a discrete projection method involving spherical harmonic functions. Numerical results are included.
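For orientation, the Laguerre reduction described above can be written out in one common convention (following the Chapko-Kress approach; the scaling parameter κ and the coefficient notation here are illustrative, not taken from the paper):

```latex
% Expand the solution in scaled Laguerre polynomials L_n with parameter \kappa > 0:
%   u(x,t) = \kappa \sum_{n=0}^{\infty} u_n(x)\, L_n(\kappa t).
% Substituting into the heat equation u_t = \Delta u yields the recursive
% sequence of stationary elliptic problems
\Delta u_n - \kappa\, u_n = \kappa \sum_{m=0}^{n-1} u_m
  \quad \text{in the exterior domain}, \qquad
u_n = f_n \quad \text{on the boundary},
```

where the f_n are the Laguerre coefficients of the Dirichlet data. Each u_n depends only on u_0, …, u_{n-1}, which is the "sequence of stationary elliptic problems" the abstract refers to; each is then solved by the boundary integral equation approach.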
Abstract:
In this work, we introduce the periodic nonlinear Fourier transform (PNFT) method as an alternative and efficacious tool for compensation of nonlinear transmission effects in optical fiber links. In Part I, we introduce the algorithmic platform of the technique, describing in detail the direct and inverse PNFT operations, also known as the inverse scattering transform for the periodic (in the time variable) nonlinear Schrödinger equation (NLSE). We pay special attention to explaining the potential advantages of PNFT-based processing over the previously studied nonlinear Fourier transform (NFT) based methods. Further, we elucidate the issue of numerical PNFT computation: we compare the performance of four known numerical methods applicable to the calculation of the nonlinear spectral data (the direct PNFT), in particular taking the main spectrum (utilized further in Part II for modulation and transmission) associated with some simple example waveforms as the quality indicator for each method. We show that the Ablowitz-Ladik discretization approach for the direct PNFT provides the best performance in terms of accuracy and computational time.
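As a concrete illustration of the transfer-matrix side of the direct PNFT, the sketch below assembles the monodromy matrix of the periodic Zakharov-Shabat problem with an Ablowitz-Ladik discretization and returns the Floquet discriminant trace Δ(λ); the main spectrum consists of the λ at which Δ(λ) = ±2. The matrix form and the unimodular normalisation used here are one common convention for the focusing NLSE and are an assumption of this sketch, not taken from the paper.

```python
import numpy as np

def monodromy_trace(q, T, lam):
    """Floquet discriminant trace for the periodic Zakharov-Shabat problem,
    approximated with the Ablowitz-Ladik transfer-matrix scheme.
    q   : complex samples of one period of the potential (length N)
    T   : period length, so the step is h = T / N
    lam : spectral parameter
    """
    N = len(q)
    h = T / N
    z = np.exp(-1j * lam * h)          # discrete spectral variable
    M = np.eye(2, dtype=complex)       # monodromy matrix accumulator
    for qn in q:
        Q = h * qn                     # scaled potential sample
        # unimodular AL transfer matrix (focusing normalisation, det = 1)
        Tn = np.array([[z, Q], [-np.conj(Q), 1.0 / z]]) / np.sqrt(1 + abs(Q) ** 2)
        M = Tn @ M                     # accumulate over one period
    return np.trace(M)
```

A convenient sanity check is the zero potential, for which the exact discriminant is Δ(λ) = 2 cos(λT).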
Abstract:
The performance of seven minimization algorithms is compared on five neural network problems. These include a variable-step-size algorithm, conjugate gradient, and several methods with explicit analytic or numerical approximations to the Hessian.
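As a minimal, self-contained illustration of two of the algorithm families mentioned (a gradient method whose step size adapts each iteration, and conjugate gradient), the sketch below runs both on a simple quadratic objective; this stands in for, and is not, the neural network problems of the paper.

```python
import numpy as np

def steepest_descent(A, b, x0, iters):
    """Gradient descent with an exact (variable) line-search step on
    f(x) = 0.5 x'Ax - b'x, for symmetric positive definite A."""
    x = x0.copy()
    for _ in range(iters):
        r = b - A @ x                      # negative gradient
        alpha = (r @ r) / (r @ (A @ r))    # exact step length along r
        x = x + alpha * r
    return x

def conjugate_gradient(A, b, x0, iters):
    """Linear conjugate gradient on the same quadratic; reaches the
    minimiser in at most n steps in exact arithmetic."""
    x = x0.copy()
    r = b - A @ x
    p = r.copy()
    for _ in range(iters):
        Ap = A @ p
        alpha = (r @ r) / (p @ Ap)
        x = x + alpha * p
        r_new = r - alpha * Ap
        beta = (r_new @ r_new) / (r @ r)   # Fletcher-Reeves update
        p = r_new + beta * p
        r = r_new
    return x
```

On an ill-conditioned problem, conjugate gradient finishes in n iterations while the exact-line-search gradient method is still far from the minimiser — the kind of gap such comparisons quantify.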
Abstract:
With the extensive use of pulse modulation methods in telecommunications, much work has been done in the search for a better utilisation of the transmission channel. The present research is an extension of these investigations. A new modulation method, 'Variable Time-Scale Information Processing' (VTSIP), is proposed. The basic principles of this system have been established, and the main advantages and disadvantages investigated. With the proposed system, comparison circuits detect the instants at which the input signal voltage crosses predetermined amplitude levels. The time intervals between these occurrences are measured digitally and the results are temporarily stored before being transmitted. After reception, an inverse process enables the original signal to be reconstituted. The advantage of this system is that the irregularities in the rate of information contained in the input signal are smoothed out before transmission, allowing the use of a smaller transmission bandwidth. A disadvantage of the system is the time delay necessarily introduced by the storage process. Another disadvantage is a type of distortion caused by the finite store capacity. A simulation of the system has been made using a standard speech signal, to make some assessment of this distortion. It is concluded that the new system should be an improvement on existing pulse transmission systems, allowing the use of a smaller transmission bandwidth, but introducing a time delay.
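The level-crossing encoding step described above can be sketched as follows. The function name, the linear interpolation of crossing instants, and the (level, interval) record format are illustrative assumptions of this sketch, not the thesis's actual comparison circuits.

```python
def level_crossings(samples, levels, dt):
    """Detect the instants at which a sampled signal crosses fixed amplitude
    levels, and return (level, interval-since-previous-crossing) pairs --
    the quantities a VTSIP-style encoder would store and transmit.
    samples : signal values taken every dt seconds
    levels  : amplitude thresholds to monitor
    """
    events = []
    last_t = 0.0
    for i in range(1, len(samples)):
        for lv in levels:
            a, b = samples[i - 1] - lv, samples[i] - lv
            if a == 0 or a * b < 0:        # sign change => crossing of lv
                # linear interpolation for the crossing instant
                t = (i - 1 + (0.0 if a == 0 else a / (a - b))) * dt
                events.append((lv, t - last_t))
                last_t = t
    return events
```

For a unit ramp sampled once per second with levels at 0.5, 1.5 and 2.5, the encoder emits one crossing every second after an initial half-second interval, which is exactly the stored-interval stream the abstract describes.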
Abstract:
Purpose: To assess repeatability and reproducibility, to determine normative data, and to investigate the effect of age-related macular disease, compared with normals, on photostress recovery time measured using the Eger Macular Stressometer (EMS). Method: The study population comprised 49 healthy eyes of 49 participants. Four EMS measurements were taken in two sessions separated by 1 h by two practitioners, with reversal of order in the second session. EMS readings were also taken from 17 age-related maculopathy (ARM), and 12 age-related macular degeneration (AMD), affected eyes. Results: EMS readings are repeatable to within ± 7 s. There is a statistically significant difference between controls and ARM affected eyes (t = 2.169, p = 0.045), and AMD affected eyes (t = 2.817, p = 0.016). The EMS is highly specific, and demonstrates sensitivity of 29% for ARM, and 50% for AMD. Conclusions: The EMS may be a useful screening test for ARM; however, direct illumination of the macula of greater intensity and longer duration may yield less variable results. © 2004 The College of Optometrists.
Abstract:
At present there is no standard assessment method for rating and comparing the quality of synthesized speech. This study assesses the suitability of Time Frequency Warping (TFW) modulation for use as a reference device for assessing synthesized speech. Time Frequency Warping modulation introduces timing errors into natural speech that produce perceptual errors similar to those found in synthetic speech. It is proposed that TFW modulation used in conjunction with a listening effort test would provide a standard assessment method for rating the quality of synthesized speech. This study identifies the most suitable TFW modulation variable parameter to be used for assessing synthetic speech and assesses the results of several assessment tests that rate examples of synthesized speech in terms of the TFW variable parameter and listening effort. The study also attempts to identify the attributes of speech that differentiate synthetic, TFW modulated and natural speech.
Abstract:
We investigate a mixed problem with variable lateral conditions for the heat equation that arises in modelling exocytosis, i.e. the opening of a cell boundary in specific biological species for the release of certain molecules to the exterior of the cell. The Dirichlet condition is imposed on a surface patch of the boundary, and this patch occupies a larger part of the boundary as time increases, modelling where the cell opens (the fusion pore); on the remaining part, a zero Neumann condition is imposed (no molecules can cross this boundary). Uniform concentration is assumed at the initial time. We introduce a weak formulation of this problem and show that there is a unique weak solution. Moreover, we give an asymptotic expansion for the behaviour of the solution near the opening point and for small times. We also give an integral equation for the numerical construction of the leading term in this expansion.
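A generic weak formulation for this kind of mixed problem with a time-dependent Dirichlet patch might read as follows; the notation Γ_D(t), g and u_0 is illustrative, not taken from the paper:

```latex
% Find u(\cdot,t) \in H^1(\Omega) with u = g on the growing patch \Gamma_D(t)
% such that, for all v \in H^1(\Omega) with v = 0 on \Gamma_D(t),
\int_\Omega \partial_t u \, v \, dx
  + \int_\Omega \nabla u \cdot \nabla v \, dx = 0,
\qquad u(\cdot,0) = u_0 \ \text{in } \Omega.
```

Here u_0 is the uniform initial concentration, and the zero Neumann condition on the remaining boundary ∂Ω \ Γ_D(t) is a natural condition, i.e. it is built into the variational identity rather than imposed on the trial space.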
Abstract:
Poly(ε-caprolactone) (PCL) fibres were produced by wet spinning from solutions in acetone under low shear (gravity flow) conditions. As-spun PCL fibres exhibited a mean strength and stiffness of 7.9 MPa and 0.1 GPa, respectively, and a rough, porous surface morphology. Cold drawing to an extension of 500% resulted in increases in fibre strength (43 MPa) and stiffness (0.3 GPa) and development of an oriented, fibrillar surface texture. The proliferation rate of Swiss 3T3 mouse fibroblasts and C2C12 mouse myoblasts on as-spun, 500% cold-drawn and gelatin-modified PCL fibres was determined in cell culture to provide a basic measure of the biocompatibility of the fibres. Proliferation of both cell types was consistently higher on gelatin-coated fibres relative to as-spun fibres at time points below 7 days. Fibroblast growth rates on cold-drawn PCL fibres exceeded those on as-spun fibres, but myoblast proliferation was similar on both substrates. After 1 day in culture, both cell types had spread and coalesced on the fibres to form a cell layer, which conformed closely to the underlying topography. The high fibre compliance, combined with the potential for modifying the fibre surface chemistry with cell adhesion molecules and the surface architecture by cold drawing to enhance proliferation of fibroblasts and myoblasts, recommends further investigation of gravity-spun PCL fibres for 3-D scaffold production in soft tissue engineering. © 2005 Elsevier Ltd. All rights reserved.
Abstract:
The importance of informal institutions, and in particular culture, for entrepreneurship is a subject of ongoing interest. Past research has mostly concentrated on cross-national comparisons, cultural values, and the direct effects of culture on entrepreneurial behavior, but has in the main found inconsistent results. The present research adds a fresh perspective to this research stream by turning attention to community-level culture and cultural norms. We hypothesize indirect effects of cultural norms on venture emergence: specifically, that community-level cultural norms (performance-based culture and socially-supportive institutional norms) impact important supply-side variables (entrepreneurial self-efficacy and entrepreneurial motivation), which in turn influence nascent entrepreneurs' success in creating operational ventures (venture emergence). We test our predictions on a unique longitudinal data set (PSED II) tracking nascent entrepreneurs' venture-creation efforts over a 5-year time span and find evidence supporting them. Our research contributes to a more fine-grained understanding of how culture, in particular perceptions of community cultural norms, influences venture emergence. This research highlights the embeddedness of entrepreneurial behavior, and of its immediate antecedent beliefs, in the local community context.
Abstract:
A new 3D implementation of a hybrid model based on the analogy with two-phase hydrodynamics has been developed for the simulation of liquids at the microscale. The idea of the method is to smoothly combine the atomistic description in the molecular dynamics zone with the Landau-Lifshitz fluctuating hydrodynamics representation in the rest of the system in the framework of macroscopic conservation laws through the use of a single "zoom-in" user-defined function s that has the meaning of a partial concentration in the two-phase analogy model. In comparison with our previous works, the implementation has been extended to full 3D simulations for a range of atomistic models in GROMACS, from argon to water, in equilibrium conditions with a constant or a spatially variable function s. Preliminary results of simulating the diffusion of a small peptide in water are also reported.
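In the hybrid model above, the function s weights the atomistic and hydrodynamic contributions. The sketch below shows only the simplest consequence of that idea — blending two sampled density fields through a spatially variable s — and is an illustration of the role of s, not the model's actual coupled conservation equations; the Gaussian profile for s is likewise an arbitrary assumption.

```python
import numpy as np

def blended_density(rho_md, rho_fh, s):
    """Blend an atomistically sampled density field with a fluctuating-
    hydrodynamics field through a user-defined partial-concentration
    function s in [0, 1] (s = 1 -> pure MD, s = 0 -> pure hydrodynamics)."""
    s = np.asarray(s, dtype=float)
    assert np.all((0.0 <= s) & (s <= 1.0)), "s must lie in [0, 1]"
    return s * rho_md + (1.0 - s) * rho_fh

# Example: a "zoom-in" region centred at the origin, with s decaying
# smoothly from 1 (MD core) to ~0 (hydrodynamic far field).
x = np.linspace(-5.0, 5.0, 101)
s = np.exp(-0.5 * x ** 2)               # illustrative choice of s(x)
```

In the full method the blending enters the conservation laws themselves rather than a post-hoc field average, but the limiting behaviour is the same: the MD description dominates where s ≈ 1 and the hydrodynamic one where s ≈ 0.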
Abstract:
This research focuses on automatically adapting the size of a search engine in response to fluctuations in query workload. Deploying a search engine in an Infrastructure as a Service (IaaS) cloud facilitates allocating or deallocating computer resources to or from the engine. Our solution is an adaptive search engine that repeatedly re-evaluates its load and, when appropriate, switches over to a different number of active processors. We focus on three aspects, broken out into three sub-problems: Continually determining the Number of Processors (CNP), the New Grouping Problem (NGP) and the Regrouping Order Problem (ROP). CNP is the problem of determining, as the query workload changes, the ideal number of processors p to keep active in the search engine at any given time. NGP arises when a change in the number of processors has been decided and it must be determined which groups of search data will be distributed across the processors. ROP is the problem of redistributing this data onto processors while keeping the engine responsive and minimising the switchover time and the incurred network load. We propose solutions for these sub-problems. For NGP we propose an algorithm for incrementally adjusting the index to fit the varying number of virtual machines. For ROP we present an efficient method for redistributing data among processors while keeping the search engine responsive. For CNP, we propose an algorithm that determines the new size of the search engine by re-evaluating its load. We tested the solution's performance using a custom-built prototype search engine deployed in the Amazon EC2 cloud. Our experiments show that, compared with computing the index from scratch, the incremental NGP algorithm speeds up index computation 2-10 times while maintaining similar search performance.
The chosen redistribution method is 25% to 50% faster than other methods and reduces the network load by around 30%. For CNP we present a deterministic algorithm that shows a good ability to determine a new size for the search engine. Combined, these algorithms yield an adaptive algorithm able to adjust the search engine size under a variable workload.
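A CNP-style sizing decision can be sketched as below, assuming a simple per-processor capacity model and a hysteresis band so that small load fluctuations do not trigger constant regrouping. The parameter names (headroom, capacity_qps) and the specific rule are illustrative assumptions, not the deterministic algorithm from the thesis.

```python
import math

def choose_processors(load_qps, capacity_qps, current_p, p_min=1, p_max=64,
                      headroom=0.25, hysteresis=1):
    """Illustrative CNP-style controller: pick the number of active
    processors for the observed query load, with spare headroom, and only
    move away from current_p when the target differs by more than the
    hysteresis band (to avoid regrouping on every small fluctuation).
    load_qps     : observed queries per second
    capacity_qps : queries per second one processor sustains
    """
    target = math.ceil(load_qps * (1.0 + headroom) / capacity_qps)
    target = max(p_min, min(p_max, target))
    if abs(target - current_p) <= hysteresis:
        return current_p                 # within the band: do not regroup
    return target
```

Each time this controller does return a new size, the NGP and ROP solutions would then decide how to regroup the index and in which order to move the data.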
Abstract:
For evolving populations of replicators, there is much evidence that the effect of mutations on fitness depends on the degree of adaptation to the selective pressures at play. In optimized populations, most mutations have deleterious effects, such that low mutation rates are favoured. In contrast to this, in populations thriving in changing environments a larger fraction of mutations have beneficial effects, providing the diversity necessary to adapt to new conditions. What is more, non-adapted populations occasionally benefit from an increase in the mutation rate. Therefore, there is no optimal universal value of the mutation rate, and species attempt to adjust it to their momentary adaptive needs. In this work we have used stationary populations of RNA molecules evolving in silico to investigate the relationship between the degree of adaptation of an optimized population and the value of the mutation rate promoting maximal adaptation in a short time to a new selective pressure. Our results show that this value can significantly differ from the optimal value at mutation-selection equilibrium, being strongly influenced by the structure of the population when the adaptive process begins. In the short term, highly optimized populations containing little variability respond better to environmental changes upon an increase of the mutation rate, whereas populations with a lower degree of optimization but higher variability benefit from reducing the mutation rate to adapt rapidly. These findings show a good agreement with the behaviour exhibited by actual organisms that replicate their genomes under broadly different mutation rates. © 2010 Stich et al.
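The notion of a mutation-selection equilibrium invoked above can be illustrated with a deterministic quasispecies iteration on short binary genomes. The toy single-peak fitness landscape and all parameter values here are assumptions for illustration only, not the in silico RNA populations of the paper.

```python
import numpy as np
from itertools import product

def equilibrium_mean_fitness(mu, L=4, peak_fitness=10.0, iters=2000):
    """Deterministic quasispecies iteration on binary genomes of length L
    with a single fitness peak at the all-zeros genome. Returns the mean
    fitness at mutation-selection equilibrium for per-site mutation rate mu."""
    genomes = list(product([0, 1], repeat=L))
    n = len(genomes)
    f = np.array([peak_fitness if sum(g) == 0 else 1.0 for g in genomes])
    # mutation matrix: probability that genome j is copied into genome i
    M = np.empty((n, n))
    for i, gi in enumerate(genomes):
        for j, gj in enumerate(genomes):
            d = sum(a != b for a, b in zip(gi, gj))   # Hamming distance
            M[i, j] = mu ** d * (1 - mu) ** (L - d)
    x = np.full(n, 1.0 / n)                            # uniform start
    for _ in range(iters):
        x = M @ (f * x)                                # replicate and mutate
        x /= x.sum()                                   # renormalise frequencies
    return float(f @ x)
```

Low mutation rates keep the equilibrium population concentrated on the peak (high mean fitness), while high rates spread it across the landscape — one side of the trade-off the abstract discusses, with the other side (faster adaptation to a *new* peak at higher mutation rates) requiring the full dynamic simulations of the paper.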