957 results for Classical formulation


Relevance:

20.00%

Publisher:

Abstract:

Travel time is an important network performance measure, and it quantifies congestion in a manner easily understood by all transport users. In urban networks, travel time estimation is challenging for a number of reasons, such as fluctuations in traffic flow due to traffic signals and significant flow to/from mid-link sinks/sources. The classical analytical procedure utilises cumulative plots at upstream and downstream locations to estimate travel time between the two locations. In this paper, we discuss the issues and challenges with the classical analytical procedure, such as its vulnerability to non-conservation of flow between the two locations, and the complexity of estimating exit-movement-specific travel time. Recently, we developed a methodology utilising the classical procedure to estimate average travel time and its statistics on urban links (Bhaskar, Chung et al. 2010), in which detector, signal and probe vehicle data are fused. In this paper we extend the methodology to route travel time estimation and test its performance using simulation. The originality lies in defining cumulative plots for each exit turning movement utilising a historical database that is self-updated after each estimation. The performance is also compared with a method based solely on probe vehicles (Probe-only). The performance of the proposed methodology is insensitive to different route flows, with average accuracy of more than 94% given one probe per estimation interval, which is more than a 5% gain in accuracy over the Probe-only method.
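As a minimal sketch of the classical cumulative-plot idea discussed above, the following pairs upstream and downstream detector crossings by cumulative count, assuming FIFO behaviour and conservation of flow between the two locations (the very assumptions whose violation the paper addresses); the data and function names are illustrative only.

```python
import numpy as np

def travel_times(t_up, t_down):
    """Classical cumulative-plot estimate of travel times.

    t_up, t_down: sorted crossing times of vehicles at the upstream
    and downstream detectors. Under FIFO and conservation of flow,
    the n-th upstream vehicle is the n-th downstream vehicle, so its
    travel time is the horizontal distance between the two cumulative
    count curves at count n.
    """
    n = min(len(t_up), len(t_down))  # pair the first n crossings
    return np.asarray(t_down[:n]) - np.asarray(t_up[:n])

# Toy example: vehicles enter roughly every 2 s and take ~30 s to traverse.
rng = np.random.default_rng(0)
t_up = np.cumsum(rng.uniform(1.5, 2.5, size=100))
t_down = t_up + 30 + rng.normal(0, 2, size=100)
print("mean travel time ~", travel_times(t_up, t_down).mean())
```

Any net inflow or outflow between the detectors breaks the count pairing, which is why the paper's fusion of signal and probe data is needed on real urban links.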

Relevance:

20.00%

Publisher:

Abstract:

The quality of discovered features in relevance feedback (RF) is the key issue for effective search queries. Most existing feedback methods do not carefully address the issue of selecting features for noise reduction; as a result, extracted noisy features can easily degrade effectiveness. In this paper, we propose a novel feature extraction method for query formulation. The method first extracts term association patterns in RF as knowledge for feature extraction. Negative RF is then used to improve the quality of the discovered knowledge. A novel information filtering (IF) model is developed to evaluate the proposed method. Experimental results on Reuters Corpus Volume 1 and TREC topics confirm that the proposed model achieves encouraging performance compared to state-of-the-art IF models.
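The abstract does not specify the mining algorithm, so the following is only a hedged sketch of the general idea: mine term association patterns from positive feedback documents and prune those that also occur in negative feedback. The pattern representation (co-occurring term pairs with a minimum support) is an assumption for illustration, not the paper's method.

```python
from collections import Counter
from itertools import combinations

def term_pairs(doc_terms):
    """Yield the unordered term pairs co-occurring in one document."""
    return combinations(sorted(set(doc_terms)), 2)

def extract_patterns(positive_docs, negative_docs, min_support=2):
    """Keep term-pair patterns frequent in positive feedback, pruning
    any pattern that also appears in negative feedback documents."""
    pos = Counter(p for d in positive_docs for p in term_pairs(d))
    neg = Counter(p for d in negative_docs for p in term_pairs(d))
    return {p: c for p, c in pos.items()
            if c >= min_support and neg[p] == 0}

pos_docs = [["query", "expansion", "feedback"],
            ["query", "feedback", "retrieval"],
            ["noise", "query", "feedback"]]
neg_docs = [["noise", "query"]]
print(extract_patterns(pos_docs, neg_docs))  # {('feedback', 'query'): 3}
```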

Relevance:

20.00%

Publisher:

Abstract:

Key establishment is a crucial primitive for building secure channels in a multi-party setting. Without quantum mechanics, key establishment can only be done under the assumption that some computational problem is hard. Since digital communication can be easily eavesdropped on and recorded, it is important to protect the secrecy of information against future algorithmic and computational discoveries that could break the secrecy of past keys, violating the secrecy of the confidential channel. Quantum key distribution (QKD) can be used to generate secret keys that are secure against any future algorithmic or computational improvements. QKD protocols still require authentication of classical communication, although existing security proofs of QKD typically assume idealized authentication. It is generally considered folklore that QKD, when used with computationally secure authentication, is still secure against an unbounded adversary, provided the adversary did not break the authentication during the run of the protocol. We describe a security model for quantum key distribution extending classical authenticated key exchange (AKE) security models. Using our model, we characterize the long-term security of the BB84 QKD protocol with computationally secure authentication against an eventually unbounded adversary. By basing our model on traditional AKE models, we can more readily compare the relative merits of various forms of QKD and existing classical AKE protocols. This comparison illustrates in which types of adversarial environments different quantum and classical key agreement protocols can be secure.
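As background, the sketch below simulates only the sifting stage of BB84 on an ideal channel with no eavesdropper; it is a toy illustration, not the protocol as analysed in the paper's security model, and it serves to highlight the classical basis-comparison step that must be authenticated.

```python
import secrets

def bb84_sift(n_qubits=1000):
    """Toy BB84 sifting (ideal channel, no eavesdropper).

    Alice sends random bits in random bases; Bob measures in random
    bases. Measuring in the wrong basis yields a uniformly random
    bit; matching-basis rounds form the sifted key.
    """
    alice_bits  = [secrets.randbelow(2) for _ in range(n_qubits)]
    alice_bases = [secrets.randbelow(2) for _ in range(n_qubits)]
    bob_bases   = [secrets.randbelow(2) for _ in range(n_qubits)]
    bob_bits = [b if ab == bb else secrets.randbelow(2)
                for b, ab, bb in zip(alice_bits, alice_bases, bob_bases)]
    # The bases are compared over the classical channel -- this is the
    # exchange that must be authenticated for the protocol to be secure.
    return [(a, b) for a, b, ab, bb
            in zip(alice_bits, bob_bits, alice_bases, bob_bases)
            if ab == bb]

key = bb84_sift()
print(len(key), "sifted bits; mismatches:", sum(a != b for a, b in key))
```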

Relevance:

20.00%

Publisher:

Abstract:

A new dual-scale modelling approach is presented for simulating the drying of a wet hygroscopic porous material that couples the porous medium (macroscale) with the underlying pore structure (microscale). The proposed model is applied to the convective drying of wood at low temperatures and is valid in the so-called hygroscopic range, where hygroscopically held liquid water is present in the solid phase and water exists only as vapour in the pores. Coupling between scales is achieved by imposing the macroscopic gradients of moisture content and temperature on the microscopic field using suitably defined periodic boundary conditions, which allows the macroscopic mass and thermal fluxes to be defined as averages of the microscopic fluxes over the unit cell. This novel formulation accounts for the intricate coupling of heat and mass transfer at the microscopic scale but reduces to a classical homogenisation approach if a linear relationship is assumed between the microscopic gradient and flux. Simulation results for a sample of spruce wood highlight the potential and flexibility of the new dual-scale approach. In particular, for a given unit cell configuration it is not necessary to propose the form of the macroscopic fluxes prior to the simulations, because these are determined as a direct result of the dual-scale formulation.
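As a hedged illustration of the averaging step in the linear case (where, as noted above, the formulation reduces to classical homogenisation), the sketch below computes the cell-average flux for a one-dimensional layered unit cell under an imposed macroscopic gradient. The layered geometry and equal layer thicknesses are simplifying assumptions for illustration, not the paper's wood microstructure.

```python
import numpy as np

def average_flux_1d(D, grad_macro):
    """Cell-average flux for a 1D periodic unit cell of series layers.

    At steady state the microscopic flux q = -D_i * g_i is uniform
    across layers, and the layer gradients g_i adjust so that their
    mean equals the imposed macroscopic gradient; this recovers the
    classical harmonic-mean homogenised coefficient.
    """
    D = np.asarray(D, dtype=float)
    D_eff = len(D) / np.sum(1.0 / D)   # harmonic mean of layer diffusivities
    q = -D_eff * grad_macro            # macroscopic flux = cell-average flux
    g_layers = -q / D                  # microscopic gradient in each layer
    assert np.isclose(g_layers.mean(), grad_macro)
    return q, D_eff

q, D_eff = average_flux_1d([1.0, 0.1, 0.5], grad_macro=-2.0)
print("effective diffusivity:", D_eff, " cell-average flux:", q)
```

In the nonlinear coupled heat and mass transfer case the paper addresses, no such closed-form effective coefficient exists, which is why the fluxes must be computed by averaging an actual microscale solution over the unit cell.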

Relevance:

20.00%

Publisher:

Abstract:

Background: HIV-1 Pr55gag virus-like particles (VLPs) expressed by baculovirus in insect cells are considered a very promising HIV-1 vaccine candidate, as they have been shown to elicit broad cellular immune responses when tested in animals, particularly when used as a boost to DNA or BCG vaccines. However, it is important for the VLPs to retain their structure for them to be fully functional and effective. The medium in which the VLPs are formulated and the temperature at which they are stored are two important factors affecting their stability. Findings: We describe the screening of three different readily available formulation media (sorbitol, sucrose and trehalose) for their ability to stabilise HIV-1 Pr55gag VLPs during prolonged storage. Transmission electron microscopy (TEM) was performed on VLPs stored at two different concentrations of each medium at three different temperatures (4°C, −20°C and −70°C) over different time periods, and the appearance of the VLPs was compared. VLPs stored in 15% trehalose at −70°C retained their original appearance most effectively over a period of 12 months. VLPs stored in 5% trehalose, sorbitol or sucrose were not all intact even after 1 month of storage at the temperatures tested. In addition, we showed that VLPs stored under these conditions could be frozen and re-thawed twice before showing changes in their appearance. Conclusions: Although the inclusion of other analytical tools is essential to validate these preliminary findings, storage in 15% trehalose at −70°C for 12 months was the most effective in retaining VLP stability.

Relevance:

20.00%

Publisher:

Abstract:

The extraordinary event, for Deleuze, is the object becoming subject – not in the manner of an abstract formulation, such as the substitution of one ideational representation for another but, rather, in the introduction of a vast, new, impersonal plane of subjectivity, populated by object processes and physical phenomena that in Deleuze’s discovery will be shown to constitute their own subjectivities. Deleuze’s polemic of subjectivity (the refusal of the Cartesian subject and the transcendental ego of Husserl) – long attempted by other thinkers – is unique precisely because it heralds the dawning of a new species of objecthood that will qualify as its own peculiar subjectivity. A survey of Deleuze’s early work on subjectivity, Empirisme et subjectivité (Deleuze 1953), Le Bergsonisme (Deleuze 1968), and Logique du sens (Deleuze 1969), brings the architectural reader into a peculiar confrontation with what Deleuze calls the ‘new transcendental field’, the field of subject-producing effects, which for the philosopher takes the place of both the classical and modern subject. Deleuze’s theory of consciousness and perception is premised on the critique of Husserlian phenomenology; and ipso facto his question is an architectural problematic, even if the name ‘architecture’ is not invoked...

Relevance:

20.00%

Publisher:

Abstract:

In this paper we propose a framework for both gradient descent image and object alignment in the Fourier domain. Our method centers upon the classical Lucas & Kanade (LK) algorithm, where we represent the source and template/model in the complex 2D Fourier domain rather than in the spatial 2D domain. We refer to our approach as the Fourier LK (FLK) algorithm. The FLK formulation is advantageous when one pre-processes the source image and template/model with a bank of filters (e.g. oriented edges, Gabor, etc.) as: (i) it can handle substantial illumination variations; (ii) the inefficient pre-processing filter bank step can be subsumed within the FLK algorithm as a sparse diagonal weighting matrix; (iii) unlike traditional LK, the computational cost is invariant to the number of filters, making the approach far more efficient; and (iv) the approach can be extended to the inverse compositional form of the LK algorithm, where nearly all steps (including the Fourier transform and filter bank pre-processing) can be pre-computed, leading to an extremely efficient and robust approach to gradient descent image matching. Further, these computational savings translate to non-rigid object alignment tasks that are considered extensions of the LK algorithm, such as those found in Active Appearance Models (AAMs).
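The equivalence underpinning point (ii) can be checked numerically: by Parseval's theorem, filtering the image difference with a bank of filters and summing the SSDs equals a single SSD in the Fourier domain with a diagonal spectral weight. The sketch below verifies this identity with random stand-in filters; it illustrates the weighting identity only, not the full FLK gradient descent update.

```python
import numpy as np

rng = np.random.default_rng(1)
img1 = rng.standard_normal((32, 32))
img2 = rng.standard_normal((32, 32))
filters = [rng.standard_normal((32, 32)) for _ in range(4)]  # stand-ins for Gabor/edge filters

def cconv(a, b):
    """Circular convolution via the FFT (keeps the equivalence exact)."""
    return np.real(np.fft.ifft2(np.fft.fft2(a) * np.fft.fft2(b)))

# Spatial domain: sum of SSDs after convolving the difference with each filter.
diff = img1 - img2
ssd_spatial = sum(np.sum(cconv(diff, g) ** 2) for g in filters)

# Fourier domain: one SSD with a diagonal spectral weight
# S(w) = sum_i |G_i(w)|^2 that absorbs the whole filter bank.
F = np.fft.fft2(diff)
S = sum(np.abs(np.fft.fft2(g)) ** 2 for g in filters)
ssd_fourier = np.sum(S * np.abs(F) ** 2) / F.size  # Parseval: 1/N scaling

print(np.isclose(ssd_spatial, ssd_fourier))  # True
```

Because S is fixed by the filter bank, it can be precomputed once, which is why the cost of the FLK error term does not grow with the number of filters.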

Relevance:

20.00%

Publisher:

Abstract:

A long-running issue in appetite research concerns the influence of energy expenditure on energy intake. More than 50 years ago, Otto G. Edholm proposed that "the differences between the intakes of food [of individuals] must originate in differences in the expenditure of energy". However, a relationship between energy expenditure and energy intake within any one day could not be found, although there was a correlation over 2 weeks. This issue was never resolved before interest in integrative biology was replaced by molecular biochemistry. Using a psychobiological approach, we have studied appetite control in an energy balance framework using a multi-level experimental system on a single cohort of overweight and obese human subjects. This has disclosed relationships between variables in the domains of body composition [fat-free mass (FFM), fat mass (FM)], metabolism, gastrointestinal hormones, hunger and energy intake. In this Commentary, we review our own and other data, and discuss a new formulation whereby appetite control and energy intake are regulated by energy expenditure. Specifically, we propose that FFM (the largest contributor to resting metabolic rate), but not body mass index or FM, is closely associated with self-determined meal size and daily energy intake. This formulation has implications for understanding weight regulation and the management of obesity.

Relevance:

20.00%

Publisher:

Abstract:

For the timber industry, the ability to simulate the drying of wood is invaluable for manufacturing high-quality wood products. Mathematically, however, modelling the drying of a wet porous material, such as wood, is a difficult task due to its heterogeneous and anisotropic nature, and the complex geometry of the underlying pore structure. The well-developed macroscopic modelling approach involves writing down classical conservation equations at a length scale where physical quantities (e.g., porosity) can be interpreted as averaged values over a small volume (typically containing hundreds or thousands of pores). This averaging procedure produces balance equations that resemble those of a continuum, with the exception that effective coefficients appear in their definitions.

Exponential integrators are numerical schemes for initial value problems involving a system of ordinary differential equations. These methods differ from popular Newton-Krylov implicit methods (i.e., those based on the backward differentiation formulae (BDF)) in that they do not require the solution of a system of nonlinear equations at each time step; rather, they require the computation of matrix-vector products involving the exponential of the Jacobian matrix. Although originally appearing in the 1960s, exponential integrators have recently experienced a resurgence in interest due to a greater undertaking of research in Krylov subspace methods for matrix function approximation. One of the simplest examples of an exponential integrator is the exponential Euler method (EEM), which requires, at each time step, approximation of φ(A)b, where φ(z) = (e^z − 1)/z, A ∈ R^(n×n) and b ∈ R^n.

For drying in porous media, the most comprehensive macroscopic formulation is TransPore [Perre and Turner, Chem. Eng. J., 86: 117-131, 2002], which features three coupled, nonlinear partial differential equations. The focus of the first part of this thesis is the use of the exponential Euler method (EEM) for performing the time integration of the macroscopic set of equations featured in TransPore. In particular, a new variable-stepsize algorithm for EEM is presented within a Krylov subspace framework, which allows control of the error during the integration process. The performance of the new algorithm highlights the great potential of exponential integrators not only for drying applications but across all disciplines of transport phenomena. For example, when applied to well-known benchmark problems involving single-phase liquid flow in heterogeneous soils, the proposed algorithm requires half the number of function evaluations required by an equivalent (sophisticated) Newton-Krylov BDF implementation. Furthermore, for all drying configurations tested, the new algorithm always produces, in less computational time, a solution of higher accuracy than the existing backward Euler module featured in TransPore.

Some new results relating to Krylov subspace approximation of φ(A)b are also developed in this thesis. Most notably, an alternative derivation of the approximation error estimate of Hochbruck, Lubich and Selhofer [SIAM J. Sci. Comput., 19(5): 1552-1574, 1998] is provided, which reveals why it performs well in the error control procedure.
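As a hedged illustration of the φ(A)b building block just described, the sketch below evaluates φ(A) densely and takes one exponential Euler step. A dense solve is only viable for small systems; the thesis instead develops Krylov subspace approximations of φ(A)b for the large systems arising from TransPore.

```python
import numpy as np
from scipy.linalg import expm, solve

def phi(A):
    """Dense evaluation of phi(A) = A^{-1} (e^A - I), with
    phi(z) = (e^z - 1)/z. Only viable for small nonsingular A;
    production codes approximate phi(A)b in a Krylov subspace."""
    return solve(A, expm(A) - np.eye(A.shape[0]))

def exponential_euler_step(f, J, y, h):
    """One exponential Euler step: y_{n+1} = y_n + h phi(hJ) f(y_n),
    where J is the Jacobian of f at y_n."""
    return y + h * phi(h * J) @ f(y)

# Stiff linear test problem y' = A y: EEM reproduces the exact flow.
A = np.array([[-100.0, 1.0], [0.0, -2.0]])
y0 = np.array([1.0, 1.0])
h = 0.1
y1 = exponential_euler_step(lambda y: A @ y, A, y0, h)
print(np.allclose(y1, expm(h * A) @ y0))  # True: exact for linear f
```

For linear problems the step is exact regardless of stiffness, which hints at why no nonlinear solve is needed per step; the cost is shifted into approximating the matrix function, exactly where the Krylov machinery enters.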
Two of the main drawbacks of the macroscopic approach outlined above are that the effective coefficients must be supplied to the model, and that it fails for some drying configurations in which typical dual-scale mechanisms occur. In the second part of this thesis, a new dual-scale approach for simulating wood drying is therefore proposed that couples the porous medium (macroscale) with the underlying pore structure (microscale). The proposed model is applied to the convective drying of softwood at low temperatures and is valid in the so-called hygroscopic range, where hygroscopically held liquid water is present in the solid phase and water exists only as vapour in the pores. Coupling between scales is achieved by imposing the macroscopic gradient on the microscopic field using suitably defined periodic boundary conditions, which allows the macroscopic flux to be defined as an average of the microscopic flux over the unit cell. This formulation provides a first step for moving from the macroscopic formulation featured in TransPore to a comprehensive dual-scale formulation capable of addressing any drying configuration. Simulation results reported for a sample of spruce highlight the potential and flexibility of the new dual-scale approach. In particular, for a given unit cell configuration it is not necessary to supply the effective coefficients prior to each simulation.

Relevance:

20.00%

Publisher:

Abstract:

AIM: Zhi Zhu Wan (ZZW) is a classical Chinese medical formulation used for the treatment of functional dyspepsia attributed to Spleen-deficiency Syndrome. ZZW contains Atractylodes Rhizome and Fructus Citrus Immaturus, the latter originating from either Citrus aurantium L. (BZZW) or Citrus sinensis Osbeck (RZZW). The present study was designed to elucidate disparities in the clinical efficacy of the two ZZW varieties based on the pharmacokinetics of naringenin and hesperetin. METHOD: After oral administration of the ZZWs, blood samples were collected from healthy volunteers at designated time points. Naringenin and hesperetin were detected in plasma by RP-HPLC, and pharmacokinetic parameters were derived using model-independent methods in WinNonlin. RESULTS: After oral administration of BZZW, both naringenin and hesperetin were detected in plasma and demonstrated similar pharmacokinetic parameters: Ka was 0.384±0.165 and 0.401±0.159, T1/2(ke) (h) was 5.491±3.926 and 5.824±3.067, and AUC (mg·h/L) was 34.886±22.199 and 39.407±19.535 for naringenin and hesperetin, respectively. In the case of RZZW, however, only hesperetin was found in plasma, and its pharmacokinetic properties differed from those in BZZW: Tmax for hesperetin in RZZW was about 8.515 h, its Cmax was much larger than that of BZZW, and it was eliminated more slowly, possessing a much larger AUC value. CONCLUSION: The distinct therapeutic orientations of the Chinese medical formula ZZW prepared with different Fructus Citrus Immaturus sources can be elucidated from the pharmacokinetic parameters of its constituents after oral administration.
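For readers unfamiliar with model-independent (noncompartmental) analysis, the sketch below derives Cmax, Tmax, AUC and terminal half-life from a concentration-time profile using the standard formulae: Cmax/Tmax read directly off the data, AUC by the trapezoidal rule, and half-life from a log-linear fit of the terminal points. The data here are invented for illustration and are not the study's measurements.

```python
import numpy as np

def nca_parameters(t, c):
    """Noncompartmental PK parameters from a plasma profile."""
    t, c = np.asarray(t, float), np.asarray(c, float)
    i = np.argmax(c)
    auc = np.trapz(c, t)                              # AUC(0-tlast)
    slope = np.polyfit(t[-3:], np.log(c[-3:]), 1)[0]  # terminal log-linear fit
    return {"Cmax": c[i], "Tmax": t[i],
            "AUC": auc, "t1/2": np.log(2) / -slope}

t = [0.5, 1, 2, 4, 8, 12, 24]             # h (illustrative data only)
c = [0.2, 0.8, 1.9, 2.4, 1.6, 0.9, 0.2]   # mg/L
print(nca_parameters(t, c))
```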

Relevance:

20.00%

Publisher:

Abstract:

Cloud computing is an emerging computing paradigm in which IT resources are provided over the Internet as a service to users. One such service offered through the Cloud is Software as a Service, or SaaS. SaaS can be delivered in a composite form, consisting of a set of application and data components that work together to deliver higher-level functional software. SaaS is receiving substantial attention today from both software providers and users, and analyst firms predict a positive future market for it. This raises new challenges for SaaS providers managing SaaS, especially in large-scale data centres like the Cloud. One of these challenges is managing Cloud resources for SaaS in a way that guarantees SaaS performance while optimising resource use. Extensive research on the resource optimisation of Cloud services has not yet addressed the challenges of managing resources for composite SaaS. This research addresses that gap by focusing on three new problems of composite SaaS: placement, clustering and scalability. The overall aim is to develop efficient and scalable mechanisms that facilitate the delivery of high-performance composite SaaS for users while optimising the resources used. All three problems are characterised as highly constrained, large-scale and complex combinatorial optimisation problems; therefore, evolutionary algorithms are adopted as the main technique for solving them.

The first research problem concerns how a composite SaaS is placed onto Cloud servers to optimise its performance while satisfying the SaaS resource and response time constraints. Existing research on this problem often ignores the dependencies between components and considers placement of a homogeneous type of component only. A precise formulation of the composite SaaS placement problem is presented. A classical genetic algorithm and two versions of cooperative co-evolutionary algorithms are designed to manage the placement of heterogeneous types of SaaS components together with their dependencies, requirements and constraints. Experimental results demonstrate the efficiency and scalability of these new algorithms.

In the second problem, SaaS components are assumed to be already running on Cloud virtual machines (VMs); however, due to the dynamic environment of a Cloud, the current placement may need to be modified. Existing techniques focus mostly on the infrastructure level rather than the application level. This research addresses the problem at the application level by clustering suitable components onto VMs to optimise the resources used and to maintain SaaS performance. Two versions of grouping genetic algorithms (GGAs) are designed to cater for the structural grouping of a composite SaaS. The first GGA uses a repair-based method, while the second uses a penalty-based method, to handle the problem constraints. The experimental results confirm that the GGAs always produce a better reconfiguration placement plan than a common heuristic for clustering problems.

The third research problem deals with the replication or deletion of SaaS instances to cope with the SaaS workload. Determining a scaling plan that minimises the resources used while maintaining SaaS performance is a critical task; additionally, the problem involves constraints and interdependencies between components, making solutions even more difficult to find.
A hybrid genetic algorithm (HGA) was developed to solve this problem by exploring the problem search space through its genetic operators and fitness function to determine the SaaS scaling plan. The HGA also uses the problem's domain knowledge to ensure that the solutions meet the problem's constraints and achieve its objectives. The experimental results demonstrate that the HGA consistently outperforms a heuristic algorithm, achieving a low-cost scaling and placement plan.

This research has identified three significant new problems for composite SaaS in the Cloud, and various types of evolutionary algorithms have been developed to address them, contributing to the field of evolutionary computation. The algorithms provide solutions for efficient resource management of composite SaaS in the Cloud, resulting in a low total cost of ownership for users while guaranteeing SaaS performance.
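As a toy illustration of the penalty-based evolutionary approach used for these placement problems, the sketch below runs a minimal genetic algorithm over component-to-server assignments with a latency-constraint penalty. The instance, operators and penalty weight are illustrative assumptions; this is not the thesis's cooperative co-evolutionary, grouping or hybrid algorithm.

```python
import random

def fitness(placement, cost, latency, deps, max_latency):
    """Cost of the servers hosting each component, plus a large
    penalty for each dependent pair whose inter-server latency
    violates the response-time constraint (penalty-based handling)."""
    total = sum(cost[s] for s in placement)
    total += sum(1000 for i, j in deps
                 if latency[placement[i]][placement[j]] > max_latency)
    return total

def genetic_placement(n_comp, n_srv, cost, latency, deps, max_latency,
                      pop_size=40, gens=200, p_mut=0.1):
    """Minimal GA: tournament selection, one-point crossover and
    random-reset mutation over component-to-server assignments."""
    fit = lambda x: fitness(x, cost, latency, deps, max_latency)
    pop = [[random.randrange(n_srv) for _ in range(n_comp)]
           for _ in range(pop_size)]
    for _ in range(gens):
        nxt = []
        while len(nxt) < pop_size:
            p1 = min(random.sample(pop, 2), key=fit)   # tournament
            p2 = min(random.sample(pop, 2), key=fit)
            cut = random.randrange(1, n_comp)          # one-point crossover
            child = p1[:cut] + p2[cut:]
            if random.random() < p_mut:                # random-reset mutation
                child[random.randrange(n_comp)] = random.randrange(n_srv)
            nxt.append(child)
        pop = nxt
    return min(pop, key=fit)

# Toy instance: 5 components in a dependency chain, 3 servers.
deps = [(0, 1), (1, 2), (2, 3), (3, 4)]
cost = [3.0, 1.0, 2.0]                        # per-server cost
latency = [[0, 5, 9], [5, 0, 4], [9, 4, 0]]   # inter-server latency
best = genetic_placement(5, 3, cost, latency, deps, max_latency=6)
print(best, fitness(best, cost, latency, deps, max_latency=6))
```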

Relevance:

20.00%

Publisher:

Abstract:

Purpose: This study investigated the effect of chemical conjugation of the amino acid L-leucine to the polysaccharide chitosan on the dispersibility and drug release pattern of a polymeric nanoparticle (NP)-based controlled release dry powder inhaler (DPI) formulation. Methods: A chemical conjugate of L-leucine with chitosan was synthesized and characterized by Infrared (IR) Spectroscopy, Nuclear Magnetic Resonance (NMR) Spectroscopy, Elemental Analysis and X-ray Photoelectron Spectroscopy (XPS). Nanoparticles of both chitosan and its conjugate were prepared by a water-in-oil emulsification/glutaraldehyde cross-linking method using the antihypertensive agent diltiazem (Dz) hydrochloride as the model drug. The surface morphology and particle size distribution of the nanoparticles were determined by Scanning Electron Microscopy (SEM) and Dynamic Light Scattering (DLS). The dispersibility of the nanoparticle formulation was analysed by a Twin Stage Impinger (TSI) with a Rotahaler as the DPI device. Deposition of the particles in the different stages was determined gravimetrically, and the amount of drug released was analysed by UV spectrophotometry. The release profile of the drug was studied in phosphate buffered saline at 37 °C and analysed by UV spectrophotometry. Results: The TSI study revealed that the fine particle fractions (FPF), as determined gravimetrically, for empty and drug-loaded conjugate nanoparticles were significantly higher than for the corresponding chitosan nanoparticles (24±1.2% and 21±0.7% vs 19±1.2% and 15±1.5%, respectively; n=3, p<0.05). The FPF of drug-loaded chitosan and conjugate nanoparticles, in terms of the amount of drug determined spectrophotometrically, had similar values (21±0.7% vs 16±1.6%). After an initial burst, both chitosan and conjugate nanoparticles showed controlled release lasting about 8 to 10 days, but conjugate nanoparticles showed twice as much total drug release as chitosan nanoparticles (~50% vs ~25%). Conjugate nanoparticles also showed significantly higher drug loading and entrapment efficiency than chitosan nanoparticles (conjugate: 20±1% & 46±1%; chitosan: 16±1% & 38±1%; n=3, p<0.05). Conclusion: Although L-leucine conjugation to chitosan increased the dispersibility of the formulated nanoparticles, the FPF values are still far from optimal. The particles also showed a high level of initial burst release (chitosan, 16%; conjugate, 31%) that will need further optimization.

Relevance:

20.00%

Publisher:

Abstract:

This paper presents a higher-order beam-column formulation that can capture the geometrically non-linear behaviour of steel framed structures which contain a multiplicity of slender members. Despite advances in computational frame software, analyses of large frames can still be problematic from a numerical standpoint and so the intent of the paper is to fulfil a need for versatile, reliable and efficient non-linear analysis of general steel framed structures with very many members. Following a comprehensive review of numerical frame analysis techniques, a fourth-order element is derived and implemented in an updated Lagrangian formulation, and it is able to predict flexural buckling, snap-through buckling and large displacement post-buckling behaviour of typical structures whose responses have been reported by independent researchers. The solutions are shown to be efficacious in terms of a balance of accuracy and computational expediency. The higher-order element forms a basis for augmenting the geometrically non-linear approach with material non-linearity through the refined plastic hinge methodology described in the companion paper.
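The paper's fourth-order element is not reproduced here, but the kind of flexural buckling analysis such beam-column elements target can be sketched with standard cubic Hermite elements and a consistent geometric stiffness matrix. The sketch below recovers the Euler load of a pinned-pinned column; it is an illustration under those standard-element assumptions, not the paper's higher-order updated Lagrangian formulation.

```python
import numpy as np
from scipy.linalg import eigh

def beam_matrices(EI, L, n):
    """Assemble the elastic stiffness K and the consistent geometric
    stiffness Kg (per unit compressive axial load) for a column of n
    cubic Hermite beam elements; DOFs are (w, theta) at each node."""
    le = L / n
    ke = EI / le**3 * np.array([[12,    6*le,   -12,    6*le],
                                [6*le,  4*le**2, -6*le, 2*le**2],
                                [-12,  -6*le,    12,   -6*le],
                                [6*le,  2*le**2, -6*le, 4*le**2]])
    kg = 1 / (30*le) * np.array([[36,    3*le,  -36,    3*le],
                                 [3*le,  4*le**2, -3*le, -le**2],
                                 [-36,  -3*le,   36,   -3*le],
                                 [3*le, -le**2,  -3*le,  4*le**2]])
    ndof = 2 * (n + 1)
    K, Kg = np.zeros((ndof, ndof)), np.zeros((ndof, ndof))
    for e in range(n):
        idx = slice(2*e, 2*e + 4)
        K[idx, idx] += ke
        Kg[idx, idx] += kg
    return K, Kg

EI, L, n = 1.0, 1.0, 8
K, Kg = beam_matrices(EI, L, n)
free = [i for i in range(2*(n+1)) if i not in (0, 2*n)]  # pin w at both ends
# Buckling eigenproblem (K - P*Kg) x = 0: smallest eigenvalue is P_cr.
lam = eigh(K[np.ix_(free, free)], Kg[np.ix_(free, free)], eigvals_only=True)
print("P_cr ~", lam[0], " (Euler: ", np.pi**2 * EI / L**2, ")")
```

Capturing snap-through and large-displacement post-buckling, as the paper does, additionally requires updating the geometry between load increments (the updated Lagrangian step), which this linearised eigenvalue sketch omits.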