969 results for Sophisticated voting
Abstract:
In many modeling situations in which parameter values can only be estimated or are subject to noise, the appropriate mathematical representation is a stochastic ordinary differential equation (SODE). However, unlike the deterministic case in which there are suites of sophisticated numerical methods, numerical methods for SODEs are much less sophisticated. Until a recent paper by K. Burrage and P.M. Burrage (1996), the highest strong order of a stochastic Runge-Kutta method was one. But K. Burrage and P.M. Burrage (1996) showed that by including additional random variable terms representing approximations to the higher order Stratonovich (or Ito) integrals, higher order methods could be constructed. However, this analysis applied only to the one Wiener process case. In this paper, it will be shown that in the multiple Wiener process case all known stochastic Runge-Kutta methods can suffer a severe order reduction if there is non-commutativity between the functions associated with the Wiener processes. Importantly, however, it is also suggested how this order can be repaired if certain commutator operators are included in the Runge-Kutta formulation. (C) 1998 Elsevier Science B.V. and IMACS. All rights reserved.
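To make the strong order barrier concrete, here is a minimal sketch (not the Burrage and Burrage Runge-Kutta schemes themselves) of the Milstein method, which attains strong order 1.0 for a single Wiener process because its correction term needs only the increment ΔW; with several non-commuting Wiener processes the analogous corrections require approximations of Lévy areas, which is where the order reduction discussed above arises. The geometric Brownian motion test problem and all parameters are illustrative assumptions.

```python
import numpy as np

def milstein_path(a, b, db, x0, T, n, rng):
    """Strong order 1.0 Milstein scheme for dX = a(X) dt + b(X) dW with
    one Wiener process; db is the derivative b'(x). The (dw**2 - h)
    correction is what lifts Euler-Maruyama's strong order 0.5 to 1.0."""
    h = T / n
    x = x0
    for _ in range(n):
        dw = rng.normal(0.0, np.sqrt(h))
        x += a(x) * h + b(x) * dw + 0.5 * b(x) * db(x) * (dw**2 - h)
    return x

rng = np.random.default_rng(0)
# Geometric Brownian motion: a(x) = 0.05*x, b(x) = 0.2*x, b'(x) = 0.2.
print(milstein_path(lambda x: 0.05 * x, lambda x: 0.2 * x,
                    lambda x: 0.2, 1.0, 1.0, 1000, rng))
```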
Abstract:
Concepts used in this chapter include: Thermoregulation:- Thermoregulation refers to the body’s sophisticated, multi-system regulation of core body temperature. This hierarchical system extends from highly thermo-sensitive neurons in the preoptic region of the brain proximate to the rostral hypothalamus, down to the brain stem and spinal cord. Coupled with receptors in the skin and spine, both central and peripheral information on body temperature is integrated to inform and activate the homeostatic mechanisms which maintain our core temperature at 37°C.1 Hyperthermia:- An imbalance between the metabolic and external heat accumulated in the body and the loss of heat from the body.2 Exertional heat stroke:- A disorder of excessive heat production coupled with insufficient heat dissipation which occurs in unacclimatised individuals who are engaging in over-exertion in hot and humid conditions. This phenomenon includes central nervous system dysfunction and critical dysfunction to all organ systems including renal, cardiovascular, musculoskeletal and hepatic functions. Non-exertional heat stroke:- In contrast to exertional heat stroke, which is a consequence of high heat production during strenuous exercise, non-exertional heat stroke results from prolonged exposure to high ambient temperature. The elderly, those with chronic health conditions and children are particularly susceptible.3 Rhabdomyolysis:- An acute, sometimes fatal disease characterised by destruction of skeletal muscle. In exertional heat stroke, rhabdomyolysis occurs in the context of strenuous exercise when mechanical and/or metabolic stress damages the skeletal muscle, causing elevated serum creatine kinase. Associated with this is the potential development of hyperkalaemia, myoglobinuria and renal failure. Malignant hyperthermia:- Malignant hyperthermia is “an inherited subclinical myopathy characterised by a hypermetabolic reaction during anaesthesia. The reaction is related to skeletal muscle calcium dysregulation triggered by volatile inhaled anaesthetics and/or succinylcholine.”4 Presentation includes skeletal muscle rigidity, mixed metabolic and respiratory acidosis, tachycardia, hyperpyrexia, rhabdomyolysis, hyperkalaemia, elevated serum creatine kinase, multi-organ failure, disseminated intravascular coagulation and death.5
Abstract:
Concepts used in this chapter include: Thermoregulation:- Thermoregulation refers to the body’s sophisticated, multi-system regulation of core body temperature. This hierarchical system extends from highly thermo-sensitive neurons in the preoptic region of the brain proximate to the rostral hypothalamus, down to the brain stem and spinal cord. Coupled with receptors in the skin and spine, both central and peripheral information on body temperature is integrated to inform and activate the homeostatic mechanisms which maintain our core temperature at 37°C.1 Body heat is lost through the skin, via respiration and excretions. The skin is perhaps the most important organ in regulating heat loss. Hypothermia:- Hypothermia is defined as a core body temperature less than 35°C and is the result of an imbalance between the body’s heat production and heat loss mechanisms. Hypothermia may be accidental, or induced for clinical benefit, i.e. neurological protection (therapeutic hypothermia). External environmental conditions are the most common cause of accidental hypothermia, but not the only cause of hypothermia in humans. Other causes include metabolic imbalance; trauma; neurological and infectious disease; and exposure to toxins such as organophosphates. Therapeutic hypothermia:- In some circumstances, hypothermia can be induced to protect neurological functioning as a result of the associated decrease in cerebral metabolism and energy consumption. Degenerative processes associated with periods of ischaemia, such as the excitotoxic cascade, apoptotic and necrotic cell death, microglial activation, oxidative stress and inflammation, are thereby averted or minimised.2 Mild hypothermia is the only clinically confirmed effective treatment for improving the neurological outcomes of patients comatose following cardiac arrest.3
Abstract:
Internet services are an important part of daily activities for most of us. These services come with sophisticated authentication requirements which may not be handled well by average Internet users. The management of secure passwords, for example, creates an extra overhead which is often neglected for usability reasons. Furthermore, password-based approaches are applicable only at initial login and do not protect against unlocked workstation attacks. In this paper, we provide a non-intrusive identity verification scheme based on behavioural biometrics, where keystroke dynamics based on free text is used continuously to verify the identity of a user in real time. We improve existing keystroke dynamics based verification schemes in four aspects. First, we improve scalability by using a constant number of users, instead of the whole user space, to verify the identity of the target user. Second, we provide an adaptive user model which enables our solution to take changes in user behaviour into consideration in the verification decision. Third, we identify a new distance measure which enables us to verify the identity of a user with shorter text. Fourth, we decrease the number of false results. Our solution is evaluated on a data set which we collected from users while they were interacting with their mailboxes during their daily activities.
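The abstract does not spell out its new distance measure, so the sketch below is purely illustrative: a scaled Manhattan distance over digraph latencies shared between a user's stored profile and a fresh text sample, a common baseline in the free-text keystroke dynamics literature. All digraphs and latency values are hypothetical.

```python
import statistics

def scaled_manhattan(profile, sample):
    """profile: digraph -> list of reference latencies (ms);
    sample: digraph -> mean latency observed in the text being verified.
    Lower scores mean the sample is closer to the claimed user's profile."""
    shared = [d for d in sample if d in profile and len(profile[d]) > 1]
    if not shared:
        return float("inf")
    score = 0.0
    for d in shared:
        mu = statistics.mean(profile[d])
        sigma = statistics.stdev(profile[d]) or 1.0  # guard zero spread
        score += abs(sample[d] - mu) / sigma
    return score / len(shared)

profile = {"th": [105, 98, 110, 101], "he": [80, 85, 78, 82]}
print(scaled_manhattan(profile, {"th": 103.0, "he": 81.0}))   # small: likely genuine
print(scaled_manhattan(profile, {"th": 190.0, "he": 140.0}))  # large: likely impostor
```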
Abstract:
Nowadays people heavily rely on the Internet for information and knowledge. Wikipedia is an online multilingual encyclopaedia that contains a very large number of detailed articles covering most written languages. It is often considered to be a treasury of human knowledge. It includes extensive hypertext links between documents of the same language for easy navigation. However, the pages in different languages are rarely cross-linked except for direct equivalent pages on the same subject in different languages. This could pose serious difficulties to users seeking information or knowledge from different lingual sources, or where there is no equivalent page in one language or another. In this thesis, a new information retrieval task, cross-lingual link discovery (CLLD), is proposed to tackle the problem of the lack of cross-lingual anchored links in a knowledge base such as Wikipedia. In contrast to traditional information retrieval tasks, cross-lingual link discovery algorithms actively recommend a set of meaningful anchors in a source document and establish links to documents in an alternative language. In other words, cross-lingual link discovery is a way of automatically finding hypertext links between documents in different languages, which is particularly helpful for knowledge discovery in different language domains. This study is specifically focused on Chinese / English link discovery (C/ELD). Chinese / English link discovery is a special case of the cross-lingual link discovery task. It involves natural language processing (NLP), cross-lingual information retrieval (CLIR) and cross-lingual link discovery. To justify the effectiveness of CLLD, a standard evaluation framework is also proposed. The evaluation framework includes topics, document collections, a gold standard dataset, evaluation metrics, and toolkits for run pooling, link assessment and system evaluation. With the evaluation framework, the performance of CLLD approaches and systems can be quantified. This thesis contributes to the research on natural language processing and cross-lingual information retrieval in CLLD: 1) a new simple, but effective Chinese segmentation method, n-gram mutual information, is presented for determining the boundaries of Chinese text; 2) a voting mechanism for named entity translation is demonstrated for achieving a high precision of English / Chinese machine translation; 3) a link mining approach that mines the existing link structure for anchor probabilities achieves encouraging results in suggesting cross-lingual Chinese / English links in Wikipedia. This approach was examined in the experiments for better, automatic generation of cross-lingual links that were carried out as part of the study. The overall major contribution of this thesis is the provision of a standard evaluation framework for cross-lingual link discovery research. This framework is important in CLLD evaluation as it helps in benchmarking the performance of various CLLD systems and in identifying good CLLD realisation approaches. The evaluation methods and the evaluation framework described in this thesis have been utilised to quantify system performance in the NTCIR-9 Crosslink task, which is the first information retrieval track of its kind.
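As a hedged illustration of the link mining idea in contribution 3, anchor probability is commonly estimated as the fraction of a phrase's occurrences in which it serves as anchor text. The counts below are invented for the example; the thesis's actual mining pipeline over Wikipedia is more involved.

```python
# Sketch of the "anchor probability" heuristic from link mining.
def anchor_probability(phrase, anchor_counts, occurrence_counts):
    """P(anchor | phrase) = (# times phrase appears as anchor text)
                            / (# times phrase appears at all)."""
    seen = occurrence_counts.get(phrase, 0)
    return anchor_counts.get(phrase, 0) / seen if seen else 0.0

anchor_counts = {"machine translation": 240, "translation": 55}
occurrence_counts = {"machine translation": 300, "translation": 9000}
# High-probability phrases make good candidate anchors for cross-lingual links.
for p in occurrence_counts:
    print(p, round(anchor_probability(p, anchor_counts, occurrence_counts), 3))
```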
Abstract:
Motivation: Unravelling the genetic architecture of complex traits requires large amounts of data, sophisticated models and large computational resources. The lack of user-friendly software incorporating all these requisites is delaying progress in the analysis of complex traits. Methods: Linkage disequilibrium and linkage analysis (LDLA) is a high-resolution gene mapping approach based on sophisticated mixed linear models, applicable to any population structure. LDLA can use population history information in addition to pedigree and molecular markers to decompose traits into genetic components. Analyses are distributed in parallel over a large public grid of computers in the UK. Results: We have proven the performance of LDLA with analyses of simulated data. There are real gains in statistical power to detect quantitative trait loci when using historical information compared with traditional linkage analysis. Moreover, the use of a grid of computers significantly increases computational speed, hence allowing analyses that would have been prohibitive on a single computer. © The Author 2009. Published by Oxford University Press. All rights reserved.
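The abstract does not state the model explicitly, but LDLA-style analyses are typically built on a mixed linear model; the form below is indicative only, not the paper's exact specification.

```latex
% Indicative mixed linear model underlying LDLA-style analyses
% (u collects QTL/polygenic random effects, e the residual):
y = X\beta + Zu + e, \qquad u \sim N(0,\, G\sigma_u^2), \qquad e \sim N(0,\, I\sigma_e^2)
```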
Abstract:
For the timber industry, the ability to simulate the drying of wood is invaluable for manufacturing high quality wood products. Mathematically, however, modelling the drying of a wet porous material, such as wood, is a difficult task due to its heterogeneous and anisotropic nature, and the complex geometry of the underlying pore structure. The well-developed macroscopic modelling approach involves writing down classical conservation equations at a length scale where physical quantities (e.g., porosity) can be interpreted as averaged values over a small volume (typically containing hundreds or thousands of pores). This averaging procedure produces balance equations that resemble those of a continuum with the exception that effective coefficients appear in their definitions. Exponential integrators are numerical schemes for initial value problems involving a system of ordinary differential equations. These methods differ from popular Newton-Krylov implicit methods (i.e., those based on the backward differentiation formulae (BDF)) in that they do not require the solution of a system of nonlinear equations at each time step but rather require computation of matrix-vector products involving the exponential of the Jacobian matrix. Although originally appearing in the 1960s, exponential integrators have recently experienced a resurgence in interest due to a greater undertaking of research in Krylov subspace methods for matrix function approximation. One of the simplest examples of an exponential integrator is the exponential Euler method (EEM), which requires, at each time step, approximation of φ(A)b, where φ(z) = (e^z - 1)/z, A ∈ R^{n×n} and b ∈ R^n. For drying in porous media, the most comprehensive macroscopic formulation is TransPore [Perre and Turner, Chem. Eng. J., 86: 117-131, 2002], which features three coupled, nonlinear partial differential equations. The focus of the first part of this thesis is the use of the exponential Euler method (EEM) for performing the time integration of the macroscopic set of equations featured in TransPore. In particular, a new variable-stepsize algorithm for EEM is presented within a Krylov subspace framework, which allows control of the error during the integration process. The performance of the new algorithm highlights the great potential of exponential integrators not only for drying applications but across all disciplines of transport phenomena. For example, when applied to well-known benchmark problems involving single-phase liquid flow in heterogeneous soils, the proposed algorithm requires half the number of function evaluations required by an equivalent (sophisticated) Newton-Krylov BDF implementation. Furthermore, for all drying configurations tested, the new algorithm always produces, in less computational time, a solution of higher accuracy than the existing backward Euler module featured in TransPore. Some new results relating to Krylov subspace approximation of φ(A)b are also developed in this thesis. Most notably, an alternative derivation of the approximation error estimate of Hochbruck, Lubich and Selhofer [SIAM J. Sci. Comput., 19(5): 1552-1574, 1998] is provided, which reveals why it performs well in the error control procedure. Two of the main drawbacks of the macroscopic approach outlined above are that the effective coefficients must be supplied to the model, and that it fails for some drying configurations where typical dual-scale mechanisms occur.
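To make the EEM step concrete, here is a dense-algebra sketch of y_{n+1} = y_n + h φ(hJ_n) f(y_n). The thesis approximates φ(A)b with Krylov subspace methods for large sparse systems; the direct formula below is viable only for small problems and shows the structure of the step, not the thesis's variable-stepsize algorithm.

```python
import numpy as np
from scipy.linalg import expm, solve

def phi_times(A, b):
    """phi(A) b with phi(z) = (e^z - 1)/z, computed densely as
    A^{-1} (e^A - I) b; a Krylov method would approximate this instead."""
    return solve(A, (expm(A) - np.eye(A.shape[0])) @ b)

def eem_step(f, jac, y, h):
    """One exponential Euler step for y' = f(y), with Jacobian J = jac(y)."""
    return y + h * phi_times(h * jac(y), f(y))

# Sanity check on y' = -y: EEM reproduces the exact solution for linear problems.
f = lambda y: -y
jac = lambda y: np.array([[-1.0]])
y = np.array([1.0])
for _ in range(10):
    y = eem_step(f, jac, y, 0.1)
print(y[0], np.exp(-1.0))  # the two agree to machine precision
```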
In the second part of this thesis, a new dual-scale approach for simulating wood drying is proposed that couples the porous medium (macroscale) with the underlying pore structure (microscale). The proposed model is applied to the convective drying of softwood at low temperatures and is valid in the so-called hygroscopic range, where hygroscopically held liquid water is present in the solid phase and water exists only as vapour in the pores. Coupling between scales is achieved by imposing the macroscopic gradient on the microscopic field using suitably defined periodic boundary conditions, which allows the macroscopic flux to be defined as an average of the microscopic flux over the unit cell. This formulation provides a first step for moving from the macroscopic formulation featured in TransPore to a comprehensive dual-scale formulation capable of addressing any drying configuration. Simulation results reported for a sample of spruce highlight the potential and flexibility of the new dual-scale approach. In particular, for a given unit cell configuration it is not necessary to supply the effective coefficients prior to each simulation.
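Schematically, the scale coupling just described amounts to the macroscopic flux being the cell average of the microscopic flux, with the imposed macroscopic gradient driving the periodic cell problem; the notation here is indicative, not the thesis's exact formulation.

```latex
% Macroscopic flux as the average of the microscopic flux over the unit cell Y:
\mathbf{J}_{\mathrm{macro}}(x,t) \;=\; \frac{1}{|Y|} \int_{Y} \mathbf{j}_{\mathrm{micro}}(x,y,t)\, \mathrm{d}y
```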
Abstract:
Bridges are currently rated individually for maintenance and repair action according to the structural conditions of their elements. Dealing with thousands of bridges and the many factors that cause deterioration makes this rating process extremely complicated. The current simplified but practical methods are not accurate enough. On the other hand, the sophisticated, more accurate methods are only used for a single or particular bridge type. It is therefore necessary to develop a practical and accurate rating system for a network of bridges. The first and most important step in achieving this aim is to classify bridges based on the differences in nature and the unique characteristics of the critical factors, and the relationships between them, for a network of bridges. Critical factors and vulnerable elements will be identified and placed in different categories. This classification method will be used to develop a new practical rating method for a network of railway bridges based on criticality and vulnerability analysis. This rating system will be more accurate and economical, as well as improve the safety and serviceability of railway bridges.
Abstract:
Railway bridges deteriorate with age. Factors such as environmental effects on the different materials of a bridge, variation of loads, fatigue, etc. will reduce the remaining life of bridges. Bridges are currently rated individually for maintenance and repair actions according to the structural conditions of their elements. Dealing with thousands of bridges and several factors that cause deterioration makes the rating process extremely complicated. Current simplified but practical rating methods are not based on an accurate structural condition assessment system. On the other hand, the sophisticated but more accurate methods are only used for a single bridge or particular types of bridges. It is therefore necessary to develop a practical and accurate system which will be capable of rating a network of railway bridges. This paper introduces a new method for rating a network of bridges based on their current and future structural conditions. The method identifies typical bridges representing a group of railway bridges. The most crucial agents will be determined and categorized into criticality and vulnerability factors. Classification based on structural configuration, loading, and critical deterioration factors will be conducted. Finally, a rating method for a network of railway bridges that takes into account the effects of damaged structural components, due to variations in loading and environmental conditions, on the integrity of the whole structure will be proposed. The outcome of this research is expected to significantly improve the rating methods for railway bridges by considering the unique characteristics of different factors and incorporating the correlation between them.
Abstract:
Railway bridges deteriorate with age. Factors such as environmental effects on the different materials of a bridge, variation of loads, fatigue, etc. will reduce the remaining life of bridges. Dealing with thousands of bridges and several factors that cause deterioration makes the rating process extremely complicated. Current simplified but practical methods of rating a network of bridges are not based on an accurate structural condition assessment system. On the other hand, the sophisticated but more accurate methods are only used for a single bridge or particular types of bridges. It is therefore necessary to develop a practical and accurate system, which will be capable of rating a network of railway bridges. This article introduces a new method to rate a network of bridges based on their current and future structural conditions. The method identifies typical bridges representing a group of railway bridges. The most crucial agents will be determined and categorized into criticality and vulnerability factors. Classification based on structural configuration, loading, and critical deterioration factors will be conducted. Finally, a rating method for a network of railway bridges that takes into account the effects of damaged structural components, due to variations in loading and environmental conditions, on the integrity of the whole structure will be proposed. The outcome of this article is expected to significantly improve the rating methods for railway bridges by considering the unique characteristics of different factors and incorporating the correlation among them.
Abstract:
The trial in Covecorp Constructions Pty Ltd v Indigo Projects Pty Ltd (File no BS 10157 of 2001; BS 2763 of 2002) commenced on 8 October 2007 before Fryberg J, but the matter settled on 6 November 2007 before the conclusion of the trial. This case was conducted as an “electronic trial” with the use of technology developed within the court. This was the first case in Queensland to employ this technology at trial level. The Court’s aim was to find a means to capture the key benefits which are offered by the more sophisticated trial presentation software of commercial service providers, in a way that was inexpensive for the parties and would facilitate the adoption of technology at trial much more broadly than has been the case to date.
Abstract:
Sophisticated models of human social behaviour are fast becoming highly desirable in an increasingly complex and interrelated world. Here, we propose that rather than taking established theories from the physical sciences and naively mapping them into the social world, the advanced concepts and theories of social psychology should be taken as a starting point, and used to develop a new modelling methodology. In order to illustrate how such an approach might be carried out, we attempt to model the low elaboration attitude changes of a society of agents in an evolving social context. We propose a geometric model of an agent in context, where individual agent attitudes are seen to self-organise to form ideologies, which then serve to guide further agent-based attitude changes. A computational implementation of the model is shown to exhibit a number of interesting phenomena, including a tendency for a measure of the entropy in the system to decrease, and a potential for externally guiding a population of agents towards a new desired ideology.
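As a deliberately crude toy (not the paper's geometric model of agents in context): agents holding attitude angles that drift toward the population's circular mean will self-organise toward a shared "ideology", and a histogram entropy over attitudes tends to fall, mirroring the entropy decrease reported. All dynamics and parameters below are invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(1)
theta = rng.uniform(-np.pi, np.pi, size=200)  # agent attitude angles

def entropy(angles, bins=16):
    """Histogram (Shannon) entropy of the attitude distribution."""
    hist, _ = np.histogram(angles, bins=bins, range=(-np.pi, np.pi))
    p = hist / hist.sum()
    p = p[p > 0]
    return float(-(p * np.log(p)).sum())

for step in range(5):
    # circular mean of the population: the emergent "ideology"
    mean = np.arctan2(np.sin(theta).mean(), np.cos(theta).mean())
    # low-elaboration change: a small nudge toward the ideology, plus noise
    theta += 0.3 * np.sin(mean - theta) + rng.normal(0.0, 0.05, theta.size)
    theta = np.angle(np.exp(1j * theta))  # wrap back to (-pi, pi]
    print(step, round(entropy(theta), 3))  # tends to decrease
```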
Abstract:
The perennial issues of student engagement, success and retention in higher education continue to attract attention as the salience of teaching and learning funding and performance measures has increased. This paper addresses the question of the responsibility or place of higher education institutions (HEIs) for initiating, planning, managing and evaluating their student engagement, success and retention programs and strategies. An evaluation of the current situation indicates the need for a sophisticated approach to assessing the ability of HEIs to proactively design programs and practices that enhance student engagement. An approach—the Student Engagement Success and Retention Maturity Model (SESR-MM)—is proposed and its development, current status, and relationship with and possible use in benchmarking are discussed.
Abstract:
Hosted on YouTube, and shown in various locations. In this video we show a 3D mock-up of a personal house purchasing process. A path traversal metaphor is used to give a sense of progression along the process stages. The intention is to be able to use console devices like an Xbox to consume business processes, so that businesses can expose their internal processes to consumers using sophisticated user interfaces. The demonstrator was developed using Microsoft XNA, with assistance from Suncorp Bank and the Smart Services CRC. More information at: www.bpmve.org
Abstract:
Controlled drug delivery is a key topic in modern pharmacotherapy, where controlled drug delivery devices are required to prolong the period of release, maintain a constant release rate, or release the drug with a predetermined release profile. In the pharmaceutical industry, the development process of a controlled drug delivery device may be facilitated enormously by the mathematical modelling of drug release mechanisms, directly decreasing the number of necessary experiments. Such mathematical modelling is difficult because several mechanisms are involved during the drug release process. The main drug release mechanisms of a controlled release device are based on the device’s physicochemical properties, and include diffusion, swelling and erosion. In this thesis, four controlled drug delivery models are investigated. These four models selectively involve the solvent penetration into the polymeric device, the swelling of the polymer, the polymer erosion and the drug diffusion out of the device, but all share two common key features. The first is that the solvent penetration into the polymer causes the transition of the polymer from a glassy state into a rubbery state. The interface between the two states of the polymer is modelled as a moving boundary and the speed of this interface is governed by a kinetic law. The second feature is that drug diffusion only happens in the rubbery region of the polymer, with a nonlinear diffusion coefficient which is dependent on the concentration of solvent. These models are analysed by using both formal asymptotics and numerical computation, where front-fixing methods and the method of lines with finite difference approximations are used to solve these models numerically. This numerical scheme is conservative, accurate and easily applied to moving boundary problems, and is thoroughly explained in Section 3.2. From the small time asymptotic analysis in Sections 5.3.1, 6.3.1 and 7.2.1, these models exhibit the non-Fickian behaviour referred to as Case II diffusion, and an initial constant rate of drug release, which is appealing to the pharmaceutical industry because it indicates zero-order release. The numerical results of the models qualitatively confirm the experimental behaviour identified in the literature. The knowledge obtained from investigating these models can help to develop more complex multi-layered drug delivery devices in order to achieve sophisticated drug release profiles. A multi-layer matrix tablet, which consists of a number of polymer layers designed to provide sustained and constant drug release or bimodal drug release, is also discussed in this research. The moving boundary problem describing the solvent penetration into the polymer also arises in melting and freezing problems, which have been modelled as the classical one-phase Stefan problem. The classical one-phase Stefan problem exhibits unphysical singularities at the complete melting time. Hence we investigate the effect of including kinetic undercooling in the melting problem; the resulting problem is called the one-phase Stefan problem with kinetic undercooling. Interestingly, we discover that the unphysical singularities of the classical one-phase Stefan problem at the complete melting time are regularised, and the small time asymptotic analysis in Section 3.3 shows that the small time behaviour of the one-phase Stefan problem with kinetic undercooling differs from that of the classical problem.
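To make the numerical approach concrete, here is a minimal front-fixing plus method-of-lines sketch for the classical one-phase Stefan problem, the baseline these models extend: u_t = u_xx on 0 < x < s(t), u(0,t) = 1, u(s(t),t) = 0, with Stefan condition s'(t) = -u_x(s(t),t). The nondimensional conditions and initial data are standard illustrative assumptions, not the thesis's drug-release models.

```python
import numpy as np
from scipy.integrate import solve_ivp

m = 50                                  # interior grid points in xi = x/s(t)
xi = np.linspace(0.0, 1.0, m + 2)
dxi = xi[1] - xi[0]

def rhs(t, z):
    """z = (u at interior xi nodes, front position s). The Landau
    transform xi = x/s(t) fixes the moving boundary at xi = 1, giving
    u_t = u_xixi / s^2 + xi * (s'/s) * u_xi."""
    u = np.concatenate(([1.0], z[:-1], [0.0]))   # Dirichlet BCs at xi = 0, 1
    s = z[-1]
    ux = (u[2:] - u[:-2]) / (2.0 * dxi)
    uxx = (u[2:] - 2.0 * u[1:-1] + u[:-2]) / dxi**2
    sdot = -(u[-1] - u[-2]) / (dxi * s)          # Stefan condition, one-sided
    return np.concatenate((uxx / s**2 + xi[1:-1] * sdot / s * ux, [sdot]))

z0 = np.concatenate((1.0 - xi[1:-1], [0.1]))     # linear profile, small s(0)
sol = solve_ivp(rhs, (0.0, 1.0), z0, method="BDF", rtol=1e-6)
print("front position s(1) =", sol.y[-1, -1])    # the interface advances
```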
In the case of melting very small particles, it is known that surface tension effects are important. The effect of including surface tension in the melting problem for nanoparticles (with no kinetic undercooling) has been investigated in the past; however, the one-phase Stefan problem with surface tension exhibits finite-time blow-up. Therefore we investigate the effect of including both surface tension and kinetic undercooling in the melting problem for nanoparticles, and find that the solution continues to exist until complete melting. The investigation of including kinetic undercooling and surface tension in the melting problems reveals more insight into the regularisations of unphysical singularities in the classical one-phase Stefan problem. This investigation gives a better understanding of melting a particle, and contributes to the current body of knowledge related to melting and freezing due to heat conduction.
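Schematically, and hedged since sign conventions depend on the nondimensionalisation chosen, the regularised moving-boundary condition for a melting particle of radius s(t) combines a Gibbs-Thomson surface tension term with a kinetic undercooling term along the following lines; this is an indicative form, not the thesis's exact statement.

```latex
% Indicative only: sigma and epsilon are surface tension and kinetic
% undercooling parameters; the Stefan condition still drives s(t).
u\big(s(t),t\big) = -\frac{\sigma}{s(t)} - \epsilon\,\frac{\mathrm{d}s}{\mathrm{d}t},
\qquad
\beta\,\frac{\mathrm{d}s}{\mathrm{d}t} = -\left.\frac{\partial u}{\partial r}\right|_{r=s(t)}
```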