67 results for Time equivalent approach

in QUB Research Portal - Research Directory and Institutional Repository for Queen's University Belfast


Relevance:

100.00%

Publisher:

Abstract:

Modeling of on-body propagation channels is of paramount importance to those wishing to evaluate radio channel performance for wearable devices in body area networks (BANs). Difficulties in modeling arise due to the highly variable channel conditions related to changes in the user's state and local environment. This study characterizes these influences by using time-series analysis to examine and model signal characteristics for on-body radio channels in user stationary and mobile scenarios in four different locations: anechoic chamber, open office area, hallway, and outdoor environment. Autocorrelation and cross-correlation functions are reported and shown to be dependent on body state and surroundings. Autoregressive (AR) transfer functions are used to perform time-series analysis and develop models for fading in various on-body links. Due to the non-Gaussian nature of the logarithmically transformed observed signal envelope in the majority of mobile user states, a simple method for reproducing the fading based on lognormal and Nakagami statistics is proposed. The validity of the AR models is evaluated using hypothesis testing based on the Ljung-Box statistic, and the estimated distributional parameters of the simulator output are compared with those from experimental results.
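A minimal sketch of the kind of AR fitting and Ljung-Box whiteness check this abstract describes, using Yule-Walker estimation on a synthetic AR(1) trace. This is illustrative only: it is not the authors' code, and the data are simulated rather than measured on-body channels.

```python
import numpy as np

def fit_ar_yule_walker(x, order):
    """Fit AR(p) coefficients by solving the Yule-Walker equations."""
    x = np.asarray(x, float) - np.mean(x)
    n = len(x)
    # biased autocovariance estimates r[0..order]
    r = np.array([np.dot(x[:n - k], x[k:]) / n for k in range(order + 1)])
    R = np.array([[r[abs(i - j)] for j in range(order)] for i in range(order)])
    return np.linalg.solve(R, r[1:order + 1])

def ljung_box(residuals, lags):
    """Ljung-Box Q statistic testing residuals for whiteness."""
    e = np.asarray(residuals, float) - np.mean(residuals)
    n = len(e)
    denom = np.dot(e, e)
    rho = np.array([np.dot(e[:n - k], e[k:]) / denom for k in range(1, lags + 1)])
    return n * (n + 2) * np.sum(rho ** 2 / (n - np.arange(1, lags + 1)))

# synthetic AR(1) fading surrogate (illustrative, not channel data)
rng = np.random.default_rng(0)
true_phi = 0.9
x = np.zeros(2000)
for t in range(1, len(x)):
    x[t] = true_phi * x[t - 1] + rng.normal()

phi = fit_ar_yule_walker(x, order=1)
resid = x[1:] - phi[0] * x[:-1]      # one-step residuals of the fitted model
q = ljung_box(resid, lags=20)        # small Q => residuals look white
```

If the fitted model captures the correlation structure, Q stays near its chi-squared expectation; a large Q would reject the AR order chosen.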

Relevance:

90.00%

Publisher:

Abstract:

A time-dependent method for calculating the collective excitation frequencies and densities of a trapped, inhomogeneous Bose-Einstein condensate with circulation is presented. The results are compared with time-independent solutions of the Bogoliubov-de Gennes equations. The method is based on time-dependent linear-response theory combined with spectral analysis of moments of the excitation modes of interest. The technique is straightforward to apply, extremely efficient in our implementation with parallel fast Fourier transform methods, and produces highly accurate results. For high dimensionality or low symmetry the time-dependent approach is a more practical computational scheme and produces accurate and reliable data. The method is suitable for general trap geometries, condensate flows and condensates permeated with defects and vortex structures.
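The spectral-analysis step, reading a mode frequency off the time series of a moment, can be sketched generically as FFT peak-picking. This is not the authors' implementation, and the 7 Hz cosine is an arbitrary stand-in for a moment oscillation:

```python
import numpy as np

def dominant_frequency(signal, dt):
    """Return the strongest spectral frequency of a mean-removed time trace,
    as one would read a collective-mode frequency from a moment's evolution."""
    s = np.asarray(signal, float) - np.mean(signal)
    spec = np.abs(np.fft.rfft(s))
    freqs = np.fft.rfftfreq(len(s), dt)
    return freqs[np.argmax(spec[1:]) + 1]   # skip the DC bin

dt = 0.001
t = np.arange(0.0, 2.0, dt)
x = np.cos(2 * np.pi * 7.0 * t)   # a moment oscillating at 7 Hz (illustrative)
f = dominant_frequency(x, dt)
```

In the time-dependent scheme described above, `x` would be the recorded expectation value of an operator coupled to the excitation mode of interest.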

Relevance:

90.00%

Publisher:

Abstract:

The design of medical devices could be very much improved if robust tools were available for computational simulation of tissue response to the presence of the implant. Such tools require algorithms to simulate the response of tissues to mechanical and chemical stimuli. Available methodologies include those based on the principle of mechanical homeostasis, those which use continuum models to simulate biological constituents, and the cell-centred approach, which models cells as autonomous agents. In the latter approach, cell behaviour is governed by rules based on the state of the local environment around the cell and informed by experiment. Tissue growth and differentiation require the simulation of many of these cells together. In this paper, the methodology and applications of cell-centred techniques, with particular application to mechanobiology, are reviewed, and a cell-centred model of tissue formation in the lumen of an artery in response to the deployment of a stent is presented. The method is capable of capturing some of the most important aspects of restenosis, including nonlinear lesion growth with time. The approach taken in this paper provides a framework for simulating restenosis; the next step will be to couple it with more patient-specific geometries and quantitative parameter data.
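A toy one-dimensional version of the cell-centred idea, cells as agents whose division rule depends on the free space around them, might look like this. All rules and parameters are invented for illustration and are far simpler than the lattice model described above:

```python
import random

def step(grid, p_div):
    """One update of a toy cell-centred rule: each occupied site may place a
    daughter cell into a random empty neighbour (proliferation into the lumen)."""
    n = len(grid)
    new = grid[:]
    for i, occupied in enumerate(grid):
        if occupied and random.random() < p_div:
            nbrs = [j for j in (i - 1, i + 1) if 0 <= j < n and not new[j]]
            if nbrs:                       # rule: divide only into free space
                new[random.choice(nbrs)] = 1
    return new

random.seed(0)
grid = [0] * 50
grid[25] = 1                 # a single seeded cell, standing in for a lesion
for _ in range(40):
    grid = step(grid, p_div=0.3)
growth = sum(grid)           # occupied sites after 40 update rounds
```

Because growth is limited to the free boundary of the occupied region, the lesion expands nonlinearly in time, the qualitative behaviour the abstract highlights.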

Relevance:

90.00%

Publisher:

Abstract:

The hypervariable regions of immunoglobulin heavy-chain (IgH) rearrangements provide a specific tumor marker in multiple myeloma (MM). Recently, real-time PCR assays have been developed in order to quantify the number of tumor cells after treatment. However, these strategies are hampered by the presence of somatic hypermutation (SH) in VDJH rearrangements from MM patients, which causes mismatches between primers and/or probes and the target, leading to an inaccurate quantification of tumor cells. Our group has recently described a 60% incidence of incomplete DJH rearrangements in MM patients, with no or very low rates of SH. In this study, we compare the efficiency of a real-time PCR approach for the analysis of both complete and incomplete IgH rearrangements in eight MM patients using only three JH consensus probes. We were able to design an allele-specific oligonucleotide for both the complete and incomplete rearrangement in all patients. DJH rearrangements fulfilled the criteria of effectiveness for real-time PCR in all samples (i.e. no unspecific amplification, detection of fewer than 10 tumor cells within a 10^5 polyclonal background, and correlation coefficients of standard curves higher than 0.98). By contrast, only three out of eight VDJH rearrangements fulfilled these criteria. Further analyses showed that the remaining five VDJH rearrangements carried three or more somatic mutations in the probe and primer sites, leading to a dramatic decrease in the melting temperature. These results support the use of incomplete DJH rearrangements instead of complete somatically mutated VDJH rearrangements for investigation of minimal residual disease in multiple myeloma.

Relevance:

80.00%

Publisher:

Abstract:

Quantifying nutrient and sediment loads in catchments is difficult owing to diffuse controls related to storm hydrology. Coarse sampling and interpolation methods are prone to very high uncertainties due to under-representation of high discharge, short duration events. Additionally, important low-flow processes such as diurnal signals linked to point source impacts are missed. Here we demonstrate a solution based on a time-integrated approach to sampling with a standard 24 bottle autosampler configured to take a sample every 7 h over a week according to a Plynlimon design. This is evaluated with a number of other sampling strategies using a two-year dataset of sub-hourly discharge and phosphorus concentration data. The 24/7 solution is shown to be among the least uncertain in estimating load (inter-quartile range: 96% to 110% of actual load in year 1 and 97% to 104% in year 2) due to the increased frequency raising the probability of sampling storm events and point source signals. The 24/7 solution would appear to be most parsimonious in terms of data coverage and certainty, process signal representation, potential laboratory commitment, technology requirements and the ability to be widely deployed in complex catchments.
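The load-estimation arithmetic behind comparing a 7-hourly subsample against a full sub-hourly record can be sketched as below. The discharge and concentration series are synthetic, and the flow-weighted estimator is one common choice rather than necessarily the exact estimator used in the study:

```python
import numpy as np

def true_load(q, c, dt_s):
    """Reference load (kg) from the full record: q in m3/s, c in mg/l (= g/m3)."""
    return np.sum(q * c) * dt_s / 1000.0

def sampled_load(q, c, dt_s, sample_every_s):
    """Flow-weighted load estimate (kg) from a regular subsample,
    e.g. one bottle every 7 h for the 24/7 strategy."""
    step = int(round(sample_every_s / dt_s))
    qi, ci = q[::step], c[::step]
    cw = np.sum(ci * qi) / np.sum(qi)     # flow-weighted mean concentration
    total_volume = np.sum(q) * dt_s       # m3 discharged over the record
    return cw * total_volume / 1000.0

# illustrative synthetic year: baseflow plus flashy storm component
rng = np.random.default_rng(1)
dt = 900.0                                # 15 min resolution, s
n = int(365 * 24 * 3600 / dt)
q = 1.0 + rng.gamma(2.0, 0.5, n)          # discharge, m3/s
c = 0.05 + 0.02 * q                       # P concentration rises with flow, mg/l
ratio = sampled_load(q, c, dt, 7 * 3600) / true_load(q, c, dt)
```

The ratio plays the same role as the "percentage of actual load" figures quoted above: the closer the sampling scheme tracks storm events, the closer it sits to 1.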

Relevance:

80.00%

Publisher:

Abstract:

The increasing need to understand complex products and systems with long life spans presents a significant challenge to designers, who increasingly require a broader understanding of the operational aspects of the system. This demands an evolution in current design practice, as designers are often constrained to provide a subsystem solution without full knowledge of the global system operation. Recently there has been a push to consider value-centric approaches, which should facilitate better or more rapid convergence to design solutions with predictable completion schedules. Value Driven Design is one such approach, in which value is used as the system top-level objective function. This provides a broader view of the system and enables all sub-systems and components to be designed with a view to the effect on project value. It also has the capacity to include value expressions for more qualitative aspects, such as environmental impact. However, application of the method to date has been restricted to comparing value in a programme where the lifespan is fixed and known a priori. This paper takes a novel view of value driven design through the surplus value objective function, and shows how it can be used to identify key sensitivities to guide designers in design trade-off decisions. By considering a new time-based approach it can be used to identify the optimum programme life-span and hence allow trade-offs over the whole product life.
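A minimal numeric illustration of treating programme life-span as a design variable in a value objective. All cash-flow figures and the discounting form are hypothetical, not taken from the paper:

```python
import numpy as np

def surplus_value(T, dev_cost=100.0, revenue=30.0, ops_growth=2.0, rate=0.08):
    """Illustrative surplus value of a programme with life-span T years:
    discounted net cash flow minus development cost, where operating
    costs grow with age (all figures invented for illustration)."""
    years = np.arange(1, int(T) + 1)
    disc = (1 + rate) ** -years
    cash = revenue - ops_growth * years   # net annual cash flow declines with age
    return np.sum(cash * disc) - dev_cost

# sweep candidate life-spans and pick the value-maximising one
spans = range(1, 31)
values = [surplus_value(T) for T in spans]
best = spans[int(np.argmax(values))]
```

The sweep reproduces the trade-off in the abstract: extending the life-span adds discounted revenue at first, but rising operating costs eventually turn the marginal year negative, so the value function peaks at a finite optimum life-span.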

Relevance:

80.00%

Publisher:

Abstract:

We investigate by numerical EM simulation the potential communication channel capacity of a reverberant environment using the time reversal approach, excited at 2.4 GHz by ON-OFF keyed RF pulse excitation. It is shown that approximately 725 1.25 MHz propagation channels can be allocated when the cavity contains a 4λ × 4λ or 1λ × 1λ LOS obstruction positioned between the transceiver antenna and the time reversal unit. Furthermore, the results show that two co-located transceiver dipoles separated by a spacing of 3λ/4 can successfully resolve a 10 ns pulse. Our findings suggest that independent channels with identical operating frequency can be realized in an enclosed environment such as a ventilation duct or underground tunnel. This suggests the possibility of implementing a parallel channel radio link with a minimum inter-antenna spacing of 3λ/4 between the transceivers in a rich multipath environment.

Relevance:

80.00%

Publisher:

Abstract:

Soil carbon stores are a major component of the annual returns required by EU governments to the Intergovernmental Panel on Climate Change. Peat has a high proportion of soil carbon due to the relatively high carbon density of peat and organic-rich soils. For this reason it has become increasingly important to measure and model soil carbon stores and changes in peat stocks to facilitate the management of carbon changes over time. The approach investigated in this research evaluates the use of airborne geophysical (radiometric) data to estimate peat thickness using the attenuation of bedrock geology radioactivity by superficial peat cover. Remotely sensed radiometric data are validated with ground peat depth measurements combined with non-invasive geophysical surveys. Two field-based case studies exemplify and validate the results. Variography and kriging are used to predict peat thickness from point measurements of peat depth and airborne radiometric data and provide an estimate of uncertainty in the predictions. Cokriging, by assessing the degree of spatial correlation between recent remote sensed geophysical monitoring and previous peat depth models, is used to examine changes in peat stocks over time. The significance of the coregionalisation is that the spatial cross correlation between the remote and ground based data can be used to update the model of peat depth. The result is that by integrating remotely sensed data with ground geophysics, the need is reduced for extensive ground-based monitoring and invasive peat depth measurements. The overall goal is to provide robust estimates of peat thickness to improve estimates of carbon stocks. The implications from the research have a broader significance that promotes a reduction in the need for damaging onsite peat thickness measurement and an increase in the use of remote sensed data for carbon stock estimations.
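Ordinary kriging from point peat-depth measurements can be sketched as below. This is a generic textbook formulation with an assumed exponential variogram and invented observations; the cokriging with radiometric data described above is more involved:

```python
import numpy as np

def exp_variogram(h, nugget=0.0, sill=1.0, rng_=300.0):
    """Exponential variogram model gamma(h) with practical range rng_ (m)."""
    return nugget + (sill - nugget) * (1 - np.exp(-3 * h / rng_))

def ordinary_krige(xy, z, xy0, vgm=exp_variogram):
    """Ordinary kriging prediction of peat depth at xy0 from point data,
    solving the standard variogram-based kriging system with a Lagrange
    multiplier enforcing unit-sum weights."""
    n = len(z)
    d = np.linalg.norm(xy[:, None, :] - xy[None, :, :], axis=-1)
    A = np.ones((n + 1, n + 1))
    A[:n, :n] = vgm(d)
    A[n, n] = 0.0
    b = np.ones(n + 1)
    b[:n] = vgm(np.linalg.norm(xy - xy0, axis=1))
    w = np.linalg.solve(A, b)
    est = np.dot(w[:n], z)     # kriged estimate
    var = np.dot(w, b)         # kriging variance (prediction uncertainty)
    return est, var

# hypothetical peat-depth observations (m) at field coordinates (m)
xy = np.array([[0, 0], [100, 0], [0, 100], [100, 100]], float)
z = np.array([1.2, 1.8, 1.5, 2.1])
est, var = ordinary_krige(xy, z, np.array([50.0, 50.0]))
```

The kriging variance is what provides the "estimate of uncertainty in the predictions" mentioned above; cokriging extends the same system with cross-variograms between peat depth and the radiometric covariate.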

Relevance:

80.00%

Publisher:

Abstract:

We introduce a task-based programming model and runtime system that exploit the observation that not all parts of a program are equally significant for the accuracy of the end-result, in order to trade off the quality of program outputs for increased energy-efficiency. This is done in a structured and flexible way, allowing for easy exploitation of different points in the quality/energy space, without adversely affecting application performance. The runtime system can apply a number of different policies to decide whether it will execute less-significant tasks accurately or approximately.

The experimental evaluation indicates that our system can achieve an energy reduction of up to 83% compared with a fully accurate execution and up to 35% compared with an approximate version employing loop perforation. At the same time, our approach always results in graceful quality degradation.
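One simple policy of the kind described, executing the most significant tasks accurately and the rest approximately, might be sketched as follows. The task kernels, significance values, and stride-4 "perforation-like" surrogate are all invented for illustration:

```python
import random

def accurate_task(x):
    """Exact kernel: e.g. a full-precision reduction over one data block."""
    return sum(v * v for v in x)

def approximate_task(x):
    """Cheap surrogate: visit a quarter of the block and rescale
    (in the spirit of loop perforation)."""
    return sum(v * v for v in x[::4]) * 4

def run(tasks, significance, ratio):
    """Execute the top `ratio` fraction of tasks (by significance)
    accurately and the rest approximately -- one possible policy."""
    order = sorted(range(len(tasks)), key=lambda i: -significance[i])
    accurate = set(order[:int(ratio * len(tasks))])
    return [accurate_task(t) if i in accurate else approximate_task(t)
            for i, t in enumerate(tasks)]

rng = random.Random(0)
tasks = [[rng.random() for _ in range(64)] for _ in range(100)]
sig = [rng.random() for _ in range(100)]          # per-task significance
exact = [accurate_task(t) for t in tasks]
mixed = run(tasks, sig, ratio=0.5)                # half accurate, half approximate
rel_err = sum(abs(a - b) for a, b in zip(exact, mixed)) / sum(exact)
```

Raising `ratio` trades energy for quality: at 1.0 every task runs accurately, while lower settings degrade the output gracefully, which is the behaviour the evaluation above reports.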

Relevance:

80.00%

Publisher:

Abstract:

The selective catalytic reduction (SCR) of NOx in the presence of different reducing agents over Ag/Al2O3 prepared by wet impregnation was investigated by probing catalyst activity and using NMR relaxation time analysis to probe the strength of surface interaction of the various reducing agent species and water. The results reveal that the strength of surface interaction of the reducing agent relative to water, the latter present in engine exhausts as a fuel combustion product and also produced during the SCR reaction, plays an important role in determining catalyst performance. Reducing agents with weak strength of interaction with the catalyst surface, such as hydrocarbons, show poorer catalytic performance than reducing agents with a higher strength of interaction, such as alcohols. This is attributed to the greater ability of oxygenated species to compete with water for the catalyst surface, hence reducing the inhibiting effect of water molecules blocking catalyst sites. The results support the observations of earlier work that the light-off temperature, the maximum NOx conversion and the temperature at which it occurs are sensitive to the reducing agent present during reaction, and support the proposal that improved catalyst performance is caused by increased adsorption strength of the reducing agent, relative to water, at the catalyst surface. Importantly, the NMR relaxation time analysis approach to characterising the strength of adsorption describes the trends in catalytic behaviour more readily than does a straightforward consideration of the polarity (i.e., relative permittivity) of the reducing agents studied here. In summary, this paper describes a simple approach to characterising the interaction energy of water and reducing agent so as to aid the selection of reducing agent and catalyst to be used in SCR conversions.

Relevance:

80.00%

Publisher:

Abstract:

While the EU's deference to Member State approaches in minority protection can intensify the oppression of vulnerable groups, its insistence on non-discrimination on the basis of nationality in minority regions with special rights in place can equally produce injustice. Its inability to protect EU-wide minorities, such as the Roma, is equally problematic. Although a 'value', minority protection functions incoherently, if at all. It is time to approach the EU as a highly specific minority protection arena not to be confused with its component parts, the Member States. The reform of Member State-centred thinking should start at the level of approaching the core issues. It should include the assessment of such questions as: what is a minority in the EU's context of a missing majority; what is the appropriate depth of the EU's intervention in the area of minority protection, i.e. how much room for manœuvre should reasonably be left with the Member States without disrupting the effectiveness of EU regulation; and how success in minority protection should be defined in the EU context. The latter should be done, in particular, with due regard to the division of competences between the EU and the Member States in this and other relevant fields. This paper briefly explores a series of diverse case studies, from migrant EU citizens, Baltic Russians, and sexual minorities to, most importantly, Roma rights, to make a first attempt to test the proposed synergetic approach.

Relevance:

40.00%

Publisher:

Abstract:

Brown's model for the relaxation of the magnetization of a single-domain ferromagnetic particle is considered. This model results in the Fokker-Planck equation of the process. The solution of this equation in the cases of most interest is non-trivial. The probability density of orientations of the magnetization in the Fokker-Planck equation can be expanded in terms of an infinite set of eigenfunctions and their corresponding eigenvalues, which obey a Sturm-Liouville type equation. A variational principle is applied to the solution of this equation in the case of an axially symmetric potential. The first (non-zero) eigenvalue, corresponding to the largest time constant, is considered. From this we obtain two new results. Firstly, an approximate minimising trial function is obtained which allows calculation of a rigorous upper bound. Secondly, a new upper bound formula is derived based on the Euler-Lagrange condition. This leads to a very accurate calculation of the eigenvalue; interestingly, use of the simplest trial function also yields a result equivalent to the correlation time of Coffey et al. and the integral relaxation time of Garanin.
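The variational principle referred to here is the Rayleigh-quotient bound for the smallest non-zero eigenvalue of a Sturm-Liouville problem. In generic notation (p and w are the Sturm-Liouville coefficient and weight functions and ψ₀ the stationary eigenfunction; these symbols are generic, not necessarily the paper's):

```latex
\lambda_1 \;\le\;
\frac{\int p(x)\,\phi'(x)^{2}\,\mathrm{d}x}
     {\int w(x)\,\phi(x)^{2}\,\mathrm{d}x},
\qquad
\text{for any admissible trial function } \phi \text{ with }
\int w(x)\,\phi(x)\,\psi_{0}(x)\,\mathrm{d}x = 0 .
```

The largest time constant then satisfies τ₁ = 1/λ₁ ≥ (reciprocal of the quotient), so minimising the quotient over a family of trial functions yields the rigorous upper bound on λ₁, and hence a lower bound on the relaxation time, mentioned in the abstract.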

Relevance:

40.00%

Publisher:

Abstract:

A dynamical method for simulating steady-state conduction in atomic and molecular wires is presented which is both computationally and conceptually simple. The method is tested by calculating the current-voltage spectrum of a simple diatomic molecular junction, for which the static Landauer approach produces multiple steady-state solutions. The dynamical method quantitatively reproduces the static results and provides information on the stability of the different solutions.

Relevance:

40.00%

Publisher:

Abstract:

An attosecond pump-probe scheme that combines the use of a free-electron laser pulse with an ultrashort pulse is applied in order to explore the ultrafast excitation dynamics in Ne. We describe the multielectron dynamics using a new nonperturbative time-dependent R-matrix theory. This theory enables the interaction of ultrashort light fields with multielectron atoms and atomic ions to be determined from first principles. By probing the emission of an inner 2s electron from Ne we are also able to study the bound state population dynamics during the free-electron laser pulse.

Relevance:

40.00%

Publisher:

Abstract:

Implementing time analysis methods quickly and accurately in the era of digital manufacturing has become a significant challenge for aerospace manufacturers hoping to build and maintain a competitive advantage. This paper proposes a structure-oriented, knowledge-based approach for intelligent time analysis of aircraft assembly processes within a digital manufacturing framework. A knowledge system is developed so that design knowledge can be intelligently retrieved to implement assembly time analysis automatically. A time estimation method based on MOST is reviewed and employed. Knowledge capture, transfer and storage within the digital manufacturing environment are extensively discussed. Configured plan types, GUIs and functional modules are designed and developed for the automated time analysis. An exemplar study using an aircraft panel assembly from a regional jet is also presented. Although the method currently focuses on aircraft assembly, it can also be utilized in other industry sectors, such as transportation, automobile and shipbuilding. The main contribution of the work is a methodology that facilitates the integration of time analysis with design and manufacturing using a digital manufacturing platform solution.
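The MOST arithmetic mentioned above is simple to illustrate: index values read from the data cards are summed, multiplied by 10 to give TMU, and converted at 1 TMU = 0.036 s. A sketch follows; the General Move sequence model A B G A B P A is standard MOST, but the index values below are invented for illustration, not taken from the paper:

```python
TMU_SECONDS = 0.036  # 1 TMU = 0.036 s in MOST/MTM work-measurement systems

def most_general_move(A, B, G, A2, B2, P, A3):
    """Time (s) for one MOST General Move sequence A B G A B P A:
    index values are summed, scaled by 10 to TMU, then converted to seconds."""
    tmu = 10 * (A + B + G + A2 + B2 + P + A3)
    return tmu * TMU_SECONDS

# hypothetical fastener pick-and-place step (indices are assumptions):
# reach within arm's length (A=1), no body motion, grasp light object (G=1),
# move within reach (A=1), place with light pressure and adjustment (P=3)
t = most_general_move(A=1, B=0, G=1, A2=1, B2=0, P=3, A3=0)
```

An automated system of the kind proposed would retrieve such index values from the knowledge base for each assembly operation and sum the sequence times across the process plan.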