899 results for Arbitrary dimension
Abstract:
Transportation disadvantage has been recognised as a key source of social exclusion; an appropriate process is therefore required to investigate and resolve this problem. Currently, transportation disadvantage is postulated on the basis of income, poverty and mobility level. It may be better regarded from an accessibility perspective, as this represents the inability of individuals to access desired activities. This paper attempts to justify a process for determining transportation disadvantage by incorporating accessibility and social transportation conflict as the essence of a framework. The framework embeds space-time organisation within the dimensions of accessibility to identify a rigorous definition of transportation disadvantage. In developing the framework, the definitions, dimensions, components and measures of accessibility were scrutinised. The findings suggest that definition and dimension are the significant research approaches for evaluating the travel experience of the disadvantaged. Concurrently, location accessibility measures are incorporated to strengthen the determination of accessibility level. A literature review of social exclusion and mobility-related exclusion identified the dimensions and sources of transportation disadvantage. It revealed that the appropriate approach to justifying transportation disadvantage is to incorporate space-time organisation within the studied components. The suggested framework is an inter-related process consisting of the components of accessibility: the individual, the network (transport system) and activities (destinations). The integration and correlation among these components determine the level of transportation disadvantage. The prior findings are used to retrieve the spatial distribution of the transportation disadvantaged, and appropriate policies are developed to resolve the problems.
Abstract:
Many interesting phenomena have been observed in layers of granular materials subjected to vertical oscillations; these include the formation of a variety of standing wave patterns, and the occurrence of isolated features called oscillons, which alternately form conical heaps and craters oscillating at one-half of the forcing frequency. No continuum-based explanation of these phenomena has previously been proposed. We apply a continuum theory, termed the double-shearing theory, which has had success in analyzing various problems in the flow of granular materials, to the problem of a layer of granular material on a vertically vibrating rigid base undergoing vertical oscillations in plane strain. There exists a trivial solution in which the layer moves as a rigid body. By investigating linear perturbations of this solution, we find that at certain amplitudes and frequencies this trivial solution can bifurcate. The time dependence of the perturbed solution is governed by Mathieu’s equation, which allows stable, unstable and periodic solutions, and the observed period-doubling behaviour. Several solutions for the spatial velocity distribution are obtained; these include one in which the surface undergoes vertical velocities that have sinusoidal dependence on the horizontal space dimension, which corresponds to the formation of striped standing waves, and is one of the observed patterns. An alternative continuum theory of granular material mechanics, in which the principal axes of stress and rate-of-deformation are coincident, is shown to be incapable of giving rise to similar instabilities.
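The stability analysis described above hinges on Mathieu's equation. As a minimal, hedged sketch (not the authors' actual derivation), the monodromy matrix of y'' + (a − 2q cos 2t)y = 0 over one coefficient period can be computed numerically; |trace| ≤ 2 gives bounded solutions, while a trace below −2 signals the period-doubling instability mentioned in the abstract. The parameter values here are illustrative only:

```python
import numpy as np
from scipy.integrate import solve_ivp

def monodromy_trace(a, q):
    """Trace of the monodromy matrix of Mathieu's equation y'' + (a - 2q cos 2t) y = 0."""
    def rhs(t, y):
        return [y[1], -(a - 2*q*np.cos(2*t)) * y[0]]
    cols = []
    for y0 in ([1.0, 0.0], [0.0, 1.0]):      # columns of the fundamental matrix
        sol = solve_ivp(rhs, (0.0, np.pi), y0, rtol=1e-10, atol=1e-12)
        cols.append(sol.y[:, -1])
    M = np.array(cols).T                      # monodromy matrix over one period (pi)
    return float(np.trace(M))

# |trace| <= 2: bounded (stable) motion; trace < -2: period-doubling instability,
# consistent with the observed subharmonic response at half the forcing frequency
```

A point inside the first instability tongue (a = 1, small q) has real negative Floquet multipliers, which is the continuum analogue of the period-doubled patterns the abstract describes.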
Abstract:
The population Monte Carlo algorithm is an iterative importance sampling scheme for solving static problems. We examine the population Monte Carlo algorithm in a simplified setting, a single step of the general algorithm, and study a fundamental problem that occurs when applying importance sampling to high-dimensional problems. The precision of the computed estimate in the simplified setting is measured by the asymptotic variance of the estimate under conditions on the importance function. We demonstrate the exponential growth of the asymptotic variance with the dimension and show that the optimal covariance matrix for the importance function can be estimated in special cases.
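The exponential growth of variance with dimension can be seen in a small numerical sketch (illustrative, not the paper's setting): with a Gaussian proposal wider than a standard-normal target, the effective sample size of the importance weights collapses as the dimension grows. The proposal width and sample size are assumed values:

```python
import numpy as np

rng = np.random.default_rng(0)
n, s = 20000, 1.5        # sample size and proposal std (assumed, for illustration)

def ess_fraction(d):
    """Effective sample size fraction for importance sampling in d dimensions."""
    x = rng.normal(0.0, s, size=(n, d))              # draws from proposal N(0, s^2 I)
    # log importance weight: log N(x; 0, I) - log N(x; 0, s^2 I)
    logw = -0.5 * (1.0 - 1.0/s**2) * (x**2).sum(axis=1) + d * np.log(s)
    w = np.exp(logw - logw.max())                    # numerically stabilised weights
    return (w.sum()**2 / (w**2).sum()) / n           # ESS / n

# the ESS fraction decays roughly geometrically in d, mirroring the
# exponential growth of the asymptotic variance with the dimension
```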
Abstract:
We consider a new form of authenticated key exchange which we call multi-factor password-authenticated key exchange, where session establishment depends on successful authentication of multiple short secrets that are complementary in nature, such as a long-term password and a one-time response, allowing the client and server to be mutually assured of each other's identity without directly disclosing private information to the other party. Multi-factor authentication can provide an enhanced level of assurance in higher-security scenarios such as online banking, virtual private network access, and physical access because a multi-factor protocol is designed to remain secure even if all but one of the factors has been compromised. We introduce a security model for multi-factor password-authenticated key exchange protocols, propose an efficient and secure protocol called MFPAK, and provide a security argument to show that our protocol is secure in this model. Our security model is an extension of the Bellare-Pointcheval-Rogaway security model for password-authenticated key exchange and accommodates an arbitrary number of symmetric and asymmetric authentication factors.
Abstract:
Multi-storey buildings are highly vulnerable to terrorist bombing attacks in various parts of the world. Large numbers of casualties and extensive property damage result not only from blast overpressure, but also from the failure of structural components. Understanding the blast response and damage consequences of reinforced concrete (RC) building frames is therefore important when assessing multi-storey buildings designed to resist normal gravity loads. However, limited research has been conducted to identify the blast response and damage of RC frames in order to assess the vulnerability of entire buildings. This paper discusses the blast response and damage evaluation of a three-dimensional (3D) RC rigid frame under potential blast load scenarios. Explicit finite element modelling and analysis under time-history blast pressure loads were carried out using the LS-DYNA code. A complete 3D RC frame was developed with relevant reinforcement details and material models incorporating strain rate effects. Idealised triangular blast pressures calculated from standard manuals are applied to the front face of the model in the present investigation. The analysis results show the blast response, in terms of displacements and material yielding of the structural elements in the RC frame. The level of damage is evaluated and classified according to the selected load case scenarios. Residual load-carrying capacities are evaluated and the level of damage is presented by means of defined damage indices. This information is necessary to determine the vulnerability of existing multi-storey buildings with RC frames and to identify the level of damage under typical external explosion environments. It also provides basic guidance for the design of new buildings to resist blast loads.
Abstract:
A major focus of research in nanotechnology is the development of novel, high-throughput techniques for the fabrication of arbitrarily shaped surface nanostructures at scales from sub-100 nm down to atomic. A related pursuit is the development of simple and efficient means for the parallel manipulation and redistribution of adsorbed atoms, molecules and nanoparticles on surfaces – adparticle manipulation. These techniques will be used for the manufacture of nanoscale surface-supported functional devices in nanotechnologies such as quantum computing, molecular electronics and lab-on-a-chip, as well as for modifying surfaces to obtain novel optical, electronic, chemical, or mechanical properties. A favourable approach to the formation of surface nanostructures is self-assembly. In self-assembly, nanostructures are grown by aggregation of individual adparticles that diffuse by thermally activated processes on the surface. The passive nature of this process means it is generally not suited to the formation of arbitrarily shaped structures. The self-assembly of nanostructures at arbitrary positions has been demonstrated, though this has typically required a pre-patterning treatment of the surface using sophisticated techniques such as electron beam lithography. On the other hand, a parallel adparticle manipulation technique would be suited to directing the self-assembly process to occur at arbitrary positions, without the need for pre-patterning the surface. There is at present a lack of techniques for the parallel manipulation and redistribution of adparticles to arbitrary positions on the surface. This is an issue that needs to be addressed, since such techniques can play an important role in nanotechnology. In this thesis, we propose such a technique – thermal tweezers. In thermal tweezers, adparticles are redistributed by localised heating of the surface. This locally enhances the surface diffusion of adparticles so that they rapidly diffuse away from the heated regions.
Using this technique, the redistribution of adparticles to form a desired pattern is achieved by heating the surface at specific regions. In this project, we have focussed on the holographic implementation of this approach, in which the surface is heated by holographic patterns of interfering pulsed laser beams. This implementation is suitable for the formation of arbitrarily shaped structures; the only condition is that the shape can be produced by holographic means. In the simplest case, the laser pulses are linearly polarised and intersect to form an interference pattern that is a modulation of intensity along a single direction. Strong optical absorption at the intensity maxima of the interference pattern results in an approximately sinusoidal variation of the surface temperature along one direction. The main aim of this research project is to investigate the feasibility of the holographic implementation of thermal tweezers as an adparticle manipulation technique. Firstly, we investigate theoretically the surface diffusion of adparticles in the presence of a sinusoidal modulation of the surface temperature. Very strong redistribution of adparticles is predicted when there is strong interaction between the adparticle and the surface and the amplitude of the temperature modulation is ~100 K. We have proposed a thin metallic film deposited on a glass substrate and heated by interfering laser beams (optical wavelengths) as a means of generating a very large amplitude of surface temperature modulation. Indeed, we predict theoretically, by numerical solution of the thermal conduction equation, that the amplitude of the temperature modulation on the metallic film can be much greater than 100 K when heated by nanosecond pulses with an energy of ~1 mJ. The formation of surface nanostructures of less than 100 nm in width is predicted at optical wavelengths in this implementation of thermal tweezers.
Furthermore, we propose a simple extension to this technique in which a spatial phase shift of the temperature modulation effectively doubles or triples the resolution. At the same time, increased resolution is predicted by reducing the wavelength of the laser pulses. In addition, we present two distinctly different, computationally efficient numerical approaches for the theoretical investigation of the surface diffusion of interacting adparticles – the Monte Carlo Interaction Method (MCIM) and the random potential well method (RPWM). Using each of these approaches, we have investigated thermal tweezers for the redistribution of both strongly and weakly interacting adparticles. We have predicted that strong interactions between adparticles can increase the effectiveness of thermal tweezers, by demonstrating practically complete adparticle redistribution into the low-temperature regions of the surface. This is promising from the point of view of thermal tweezers applied to the directed self-assembly of nanostructures. Finally, we present a new and more efficient numerical approach to the theoretical investigation of thermal tweezers for non-interacting adparticles. In this approach, the local diffusion coefficient is determined from the solution of the Fokker-Planck equation. The diffusion equation is then solved numerically using the finite volume method (FVM) to obtain directly the probability density of adparticle position. We compare the predictions of this approach to those of the Ermak algorithm solution of the Langevin equation, and relatively good agreement is shown at intermediate and high friction. In the low friction regime, we predict and investigate the phenomenon of ‘optimal’ friction and attribute its occurrence to very long jumps of adparticles as they diffuse from the hot regions of the surface. Future research directions, both theoretical and experimental, are also discussed.
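The core mechanism, thermally activated hopping that is locally enhanced in hot regions so adparticles accumulate in cold regions, can be sketched with a toy lattice Monte Carlo model. This is an illustrative sketch, not the thesis's MCIM or RPWM methods; the activation energy, temperatures and lattice size are assumed values:

```python
import numpy as np

rng = np.random.default_rng(1)
L, N, steps = 40, 2000, 6000
x = rng.integers(0, L, size=N)                        # adparticle positions on a ring
T = 300.0 + 50.0 * np.cos(2*np.pi*np.arange(L)/L)     # K, sinusoidal surface temperature
Ea_over_k = 1000.0                                    # assumed activation energy / k_B (K)
# Arrhenius hop probability per step, scaled so the hottest site hops with prob 0.5
p = 0.5 * np.exp(-Ea_over_k/T) / np.exp(-Ea_over_k/T.max())

for _ in range(steps):
    hop = rng.random(N) < p[x]                        # thermally activated hop attempts
    x = (x + hop * rng.choice([-1, 1], size=N)) % L   # random hop direction

cold = int((T[x] < T.mean()).sum())                   # adparticles on the cooler half
hot = N - cold
# adparticles pile up on the cold sites, where the hop rate is lowest,
# mirroring the redistribution away from heated regions described above
```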
Abstract:
An alternative approach to port decoupling and matching of arrays with tightly coupled elements is proposed. The method is based on the inherent decoupling effect obtained by feeding the orthogonal eigenmodes of the array. For this purpose, a modal feed network is connected to the array. The decoupled external ports of the feed network may then be matched independently by using conventional matching circuits. Such a system may be used in digital beam forming applications with good signal-to-noise performance. The theory is applicable to arrays with an arbitrary number of elements, but implementation is only practical for smaller arrays. The principle is illustrated by means of two examples.
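The decoupling idea can be sketched numerically (a toy illustration with an assumed mutual-impedance matrix, not the authors' feed-network design): for a reciprocal array the coupling matrix is symmetric, so its orthogonal eigenvectors define modal ports at which the coupling becomes diagonal and each mode can be matched independently:

```python
import numpy as np

# assumed mutual-impedance matrix (ohms) for a symmetric 3-element array
Z = np.array([[50.0, 15.0,  5.0],
              [15.0, 50.0, 15.0],
              [ 5.0, 15.0, 50.0]])

# orthogonal eigenmodes of the real, symmetric coupling matrix
eigvals, V = np.linalg.eigh(Z)

# feeding the array through the modal network V diagonalises the coupling:
Z_modal = V.T @ Z @ V
off_diag = Z_modal - np.diag(np.diag(Z_modal))
# the off-diagonal (coupling) terms vanish; each decoupled modal port sees
# only its eigenvalue and can be matched with a conventional circuit
```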
Abstract:
Industrial applications of the simulated-moving-bed (SMB) chromatographic technology have brought an emergent demand to improve the SMB process operation for higher efficiency and better robustness. Improved process modelling and more efficient model computation will pave a path to meet this demand. However, the SMB unit operation exhibits complex dynamics, leading to challenges in SMB process modelling and model computation. One of the significant problems is how to quickly obtain the steady state of an SMB process model, as process metrics at the steady state are critical for process design and real-time control. The conventional computation method, which solves the process model cycle by cycle and takes the solution only when a cyclic steady state is reached after a certain number of switches, is computationally expensive. Adopting the concept of the quasi-envelope (QE), this work treats the SMB operation as a pseudo-oscillatory process because of its large number of continuous switches. An innovative QE computation scheme is then developed to quickly obtain the steady-state solution of an SMB model for an arbitrary initial condition. The QE computation scheme allows larger steps to be taken for predicting the slow change of the starting state within each switch. In combination with the wavelet-based technique, this scheme is demonstrated to be effective and efficient for an SMB sugar separation process. Moreover, investigations are also carried out on when the computation scheme should be activated and how the convergence of the scheme is affected by a variable step size.
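The QE scheme itself is beyond the scope of an abstract, but its motivation, avoiding expensive cycle-by-cycle marching to a cyclic steady state, can be illustrated with a toy linear cycle map. This is an assumed stand-in for the switch-to-switch SMB map, not the actual process model: iterating x_{k+1} = A x_k + b takes hundreds of cycles, whereas the steady state solves (I − A)x = b in one step:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 6
# assumed contractive cycle-to-cycle map x_{k+1} = A x_k + b (stand-in for one switch)
A = 0.9 * np.linalg.qr(rng.normal(size=(n, n)))[0]   # spectral norm 0.9 < 1
b = rng.normal(size=n)

# conventional approach: march cycle by cycle until the state stops changing
x, cycles = np.zeros(n), 0
while True:
    x_next = A @ x + b
    cycles += 1
    if np.linalg.norm(x_next - x) < 1e-10:
        break
    x = x_next

# direct approach: the cyclic steady state satisfies x* = A x* + b
x_star = np.linalg.solve(np.eye(n) - A, b)
# both agree, but the direct solve needs no cycle-by-cycle marching
```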
Abstract:
UCON is an emerging access control framework that lacks an administration model. In this paper we define the problem of administration and propose a novel administrative model. At the core of this model is the concept of the attribute, which is also the central component of UCON. In our model, attributes are created by the assertions of subjects, which ascribe properties/rights to other subjects or objects. Through such a treatment of attributes, administration capabilities can be delegated from one subject to another, and as a consequence UCON is improved in three respects. First, immutable attributes that are currently considered external to the model can be incorporated and thereby treated as mutable attributes. Second, the current arbitrary categorisation of users (as modifiers of attributes) into system and administrator can be removed: attributes and objects are only modifiable by those who possess administration capability over them. Third, the delegation of administration over objects and properties, which is not currently expressible in UCON, is made possible.
Abstract:
Purpose: To evaluate the psychometric properties of a Chinese version of the Diabetes Coping Measure (DCM-C) scale.----- Methods: A self-administered questionnaire was completed by 205 people with type 2 diabetes from the endocrine outpatient departments of three hospitals in Taiwan. Confirmatory factor analysis, criterion validity, and internal consistency reliability were conducted to evaluate the psychometric properties of the DCM-C.----- Findings: Confirmatory factor analysis confirmed a four-factor structure (χ²/df ratio = 1.351, GFI = .904, CFI = .902, RMSEA = .041). The DCM-C was significantly associated with HbA1c and diabetes self-care behaviors. Internal consistency reliability of the total DCM-C scale was .74. Cronbach’s alpha coefficients for each subscale of the DCM-C ranged from .37 (tackling spirit) to .66 (diabetes integration).----- Conclusions: The DCM-C demonstrated satisfactory reliability and validity for determining the use of diabetes coping strategies. The tackling spirit dimension needs further refinement when applying this scale to Chinese populations with diabetes.----- Clinical Relevance: Healthcare providers who deal with Chinese people with diabetes can use the DCM-C to implement an early determination of diabetes coping strategies.
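The internal-consistency statistic reported above, Cronbach's alpha, is computed from the item variances and the total-score variance: α = k/(k−1) · (1 − Σσᵢ²/σ_total²). A hedged sketch with synthetic data (not the DCM-C data):

```python
import numpy as np

def cronbach_alpha(items):
    """Cronbach's alpha for an (n_respondents, k_items) score matrix."""
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1)           # variance of each item
    total_var = items.sum(axis=1).var(ddof=1)       # variance of the total score
    return (k / (k - 1)) * (1 - item_vars.sum() / total_var)

# synthetic example: 4 items sharing one latent factor plus noise
rng = np.random.default_rng(0)
latent = rng.normal(size=(1000, 1))
items = latent + 0.5 * rng.normal(size=(1000, 4))
alpha = cronbach_alpha(items)    # high, since the items are strongly correlated
```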
Abstract:
This paper proposes a clustered approach to blind beamforming from ad-hoc microphone arrays. In such arrangements, microphone placement is arbitrary and the speaker may be close to one, all or a subset of microphones at a given time. Practical issues with such a configuration mean that some microphones might be better discarded due to poor input signal-to-noise ratio (SNR) or undesirable spatial aliasing effects from large inter-element spacings when beamforming. Large inter-microphone spacings may also lead to inaccuracies in delay estimation during blind beamforming. In such situations, using a cluster of microphones (i.e., a sub-array), closely located both to each other and to the desired speech source, may provide more robust enhancement than the full array. This paper proposes a method for blind clustering of microphones based on the magnitude squared coherence function, and evaluates the method on a database recorded using various ad-hoc microphone arrangements.
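The clustering statistic, the magnitude squared coherence, is available in standard signal-processing libraries. A minimal sketch with synthetic signals (not the paper's database; the sample rate, delay and noise levels are assumed): microphones near the source cohere strongly with each other, while a distant, noise-dominated microphone does not, which is the basis for grouping microphones into sub-arrays:

```python
import numpy as np
from scipy.signal import coherence

rng = np.random.default_rng(0)
fs, n = 16000, 16000                                   # assumed sample rate, 1 s of signal
speech = rng.normal(size=n)                            # stand-in for the speech source

mic1 = speech + 0.1 * rng.normal(size=n)               # close to the source
mic2 = np.roll(speech, 8) + 0.1 * rng.normal(size=n)   # close, slightly delayed
mic3 = rng.normal(size=n)                              # far away: noise-dominated

def mean_msc(a, b):
    """Average magnitude squared coherence between two microphone signals."""
    _, Cxy = coherence(a, b, fs=fs, nperseg=512)
    return float(Cxy.mean())

# mic1 and mic2 cohere strongly and would be clustered into one sub-array;
# mic3 shows low coherence with both and would be discarded or clustered apart
```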
Abstract:
There is no single, coherent jurisprudence for civil society organisations. Pressure for a clearly enunciated body of law applying to the whole of this sector of society continues to increase. The rise of third sector scholarship, the retreat of the welfare state, the rediscovery of the concept of civil society and pressures to strengthen social capital have all contributed to an ongoing stream of inquiry into the laws that regulate and favour civil society organisations. There have been almost thirty inquiries over the last sixty years into the doctrine of charitable purpose in common law countries. Those inquiries have established that problems with the law applying to civil society organisations are rooted in the common law's adoption of a ‘technical’ definition of charitable purpose and the failure of this body of law to develop in response to societal changes. Even though it is now well recognised that problems with law reform stem from problems inherent in the doctrine of charitable purpose, statutory reforms have merely ‘bolted on’ additions to the flawed ‘technical’ definition. In this way the scope of operation of the law has been incrementally expanded to include a larger number of civil society organisations. This piecemeal approach continues the exclusion of most civil society organisations from the law of charities discourse, and fails to address the underlying jurisprudential problems. Comprehensive reform requires revisiting the foundational problems embedded in the doctrine of charitable purpose, being informed by recent scholarship, and a paradigm shift that extends the doctrine to include all civil society organisations. Scholarly inquiry into civil society organisations, particularly from within the discipline of neoclassical economics, has elucidated insights that can inform legal theory development.
This theory development requires decoupling the two distinct functions performed by the doctrine of charitable purpose, which are: setting the scope of regulation, and determining entitlement to favours, such as tax exemption. If the two different functions of the doctrine are considered separately in the light of theoretical insights from other disciplines, the architecture emerges for a jurisprudence that facilitates regulation, but does not necessarily favour all civil society organisations. Informed by that broader discourse, it is argued that when determining the scope of regulation, civil society organisations are identified by reference to charitable purposes that are not technically defined. These charitable purposes are in essence purposes which are: Altruistic, for public Benefit, pursued without Coercion. These charitable purposes differentiate civil society organisations from organisations in the three other sectors, namely: Business, which is manifest in lack of altruism; Government, which is characterised by coercion; and Family, which is characterised by benefits being private, not public. When determining entitlement to favour, it is theorised that it is the extent or nature of the public benefit evident in the pursuit of a charitable purpose that justifies entitlement to favour. Entitlement to favour based on the extent of public benefit is theoretically simpler: the greater the public benefit, the greater the justification for favour. To be entitled to favour based on the nature of a purpose being charitable, the purpose must fall within one of three categories developed from the first three heads of Pemsel’s case (the landmark categorisation case on taxation favour). The three categories proposed are: Dealing with Disadvantage, Encouraging Edification, and Facilitating Freedom.
In this alternative paradigm a recast doctrine of charitable purpose underpins a jurisprudence for civil society in a way similar to the way contract underpins the jurisprudence for the business sector, the way that freedom from arbitrary coercion underpins the jurisprudence of the government sector and the way that equity within families underpins succession and family law jurisprudence for the family sector. This alternative architecture for the common law, developed from the doctrine of charitable purpose but inclusive of all civil society purposes, is argued to cover the field of the law applying to civil society organisations and warrants its own third space as a body of law between public law and private law in jurisprudence.