900 results for linear matrix inequalities (LMIs)
Abstract:
In recent years considerable attention has been paid to the numerical solution of stochastic ordinary differential equations (SODEs), as SODEs are often more appropriate than their deterministic counterparts in many modelling situations. However, unlike the deterministic case, numerical methods for SODEs are considerably less sophisticated, owing to the difficulty of representing the (possibly large number of) random variable approximations to the stochastic integrals. Although Burrage and Burrage [High strong order explicit Runge-Kutta methods for stochastic ordinary differential equations, Applied Numerical Mathematics 22 (1996) 81-101] were able to construct stochastic Runge-Kutta methods of strong local order 1.5 for certain cases, it is known that all extant stochastic Runge-Kutta methods suffer an order reduction to strong order 0.5 if there is non-commutativity between the functions associated with the multiple Wiener processes. This reduction to the order of the Euler-Maruyama method makes it very difficult to obtain meaningful solutions in a reasonable time frame, and this paper attempts to circumvent these difficulties with some new techniques. An additional difficulty in solving SODEs arises even in the linear case, since it is not possible to write the solution analytically in terms of matrix exponentials unless there is a commutativity property between the functions associated with the multiple Wiener processes. Thus, in the present paper, the work of Magnus [On the exponential solution of differential equations for a linear operator, Communications on Pure and Applied Mathematics 7 (1954) 649-673] (developed for deterministic non-commutative linear problems) is first applied to non-commutative linear SODEs, and methods of strong order 1.5 are constructed for arbitrary linear, non-commutative SODE systems, hence giving an accurate approximation to the general linear problem. Secondly, for general nonlinear non-commutative systems with an arbitrary number (d) of Wiener processes, it is shown that strong local order 1 Runge-Kutta methods with d + 1 stages can be constructed by evaluating a set of Lie brackets as well as the standard function evaluations. A method is then constructed which can be efficiently implemented in a parallel environment for this arbitrary number of Wiener processes. Finally, some numerical results are presented which illustrate the efficacy of these approaches.
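For reference, the Euler-Maruyama scheme to which the order reduction above corresponds can be sketched in a few lines. This is only the strong order 0.5 baseline, under illustrative drift and diffusion functions, not the Magnus-based or Runge-Kutta methods constructed in the paper:

```python
import numpy as np

def euler_maruyama(a, bs, x0, t_end, n_steps, rng):
    """Strong order 0.5 Euler-Maruyama scheme for
    dX = a(X) dt + sum_j b_j(X) dW_j  (d = len(bs) Wiener processes)."""
    dt = t_end / n_steps
    x = np.asarray(x0, dtype=float)
    for _ in range(n_steps):
        dw = rng.normal(0.0, np.sqrt(dt), size=len(bs))  # Wiener increments
        x = x + a(x) * dt + sum(b(x) * dwj for b, dwj in zip(bs, dw))
    return x

# Illustrative scalar SODE with two non-commuting noise terms
rng = np.random.default_rng(0)
a = lambda x: -x
bs = [lambda x: 0.5 * x, lambda x: np.sin(x)]
print(euler_maruyama(a, bs, x0=1.0, t_end=1.0, n_steps=1000, rng=rng))
```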
Abstract:
Introduction and aims: Individual smokers from disadvantaged backgrounds are less likely to quit, which contributes to widening inequalities in smoking. Residents of disadvantaged neighbourhoods are more likely to smoke, and neighbourhood inequalities in smoking may also be widening because of neighbourhood differences in rates of cessation. This study examined the association between neighbourhood disadvantage and smoking cessation and its relationship with neighbourhood inequalities in smoking. Design and methods: A multilevel longitudinal study of mid-aged (40-67 years) residents (n=6915) of Brisbane, Australia, who lived in the same neighbourhoods (n=200) in 2007 and 2009. Neighbourhood inequalities in cessation and smoking were analysed using multilevel logistic regression and Markov chain Monte Carlo simulation. Results: After adjustment for individual-level socioeconomic factors, the probability of quitting smoking between 2007 and 2009 was lower for residents of disadvantaged neighbourhoods (9.0%-12.8%) than for their counterparts in more advantaged neighbourhoods (20.7%-22.5%). These inequalities in cessation manifested in widening inequalities in smoking: in 2007 the between-neighbourhood variance in rates of smoking was 0.242 (p≤0.001), and in 2009 it was 0.260 (p≤0.001). In 2007, residents of the most disadvantaged neighbourhoods were 88% (OR 1.88, 95% CrI 1.41-2.49) more likely to smoke than residents of the least disadvantaged neighbourhoods; the corresponding difference in 2009 was 98% (OR 1.98, 95% CrI 1.48-2.66). Conclusion: Fundamentally, social and economic inequalities at the neighbourhood and individual levels cause smoking and cessation inequalities. Reducing these inequalities will require comprehensive, well-funded and targeted tobacco control efforts and equity-based policies that address the social and economic determinants of smoking.
Abstract:
Background The mechanisms underlying socioeconomic inequalities in mortality from cardiovascular diseases (CVD) are largely unknown. We studied the contribution of childhood socioeconomic conditions and adulthood risk factors to inequalities in CVD mortality in adulthood. Methods The prospective GLOBE study was carried out in the Netherlands, with baseline data from 1991, and linked with the cause-of-death register in 2007. At baseline, participants reported on adulthood socioeconomic position (SEP) (own educational level), childhood socioeconomic conditions (occupational level of respondent’s father), and a broad range of adulthood risk factors (health behaviours, material circumstances, psychosocial factors). The present study is based on 5,395 men and 6,306 women, and the data were analysed using Cox regression models and hazard ratios (HR). Results A low adulthood SEP was associated with increased CVD mortality for men (HR 1.84; 95% CI: 1.41-2.39) and women (HR 1.80; 95% CI: 1.04-3.10). Those with poorer childhood socioeconomic conditions were more likely to die from CVD in adulthood, but this reached statistical significance only among men with the poorest childhood socioeconomic circumstances. About half of the investigated adulthood risk factors showed significant associations with CVD mortality among both men and women, namely renting a house, experiencing financial problems, smoking, physical activity and marital status. Alcohol consumption and BMI showed a U-shaped relationship with CVD mortality among women, with the risk being significantly greater for both abstainers and heavy drinkers, and among women who were underweight or obese. Among men, being single or divorced and using sleep/anxiety drugs increased the risk of CVD mortality. In explanatory models, the largest contributors to adulthood CVD inequalities were material conditions for men (42%; 95% CI: −73 to −20) and behavioural factors for women (55%; 95% CI: −191 to −28). Simultaneous adjustment for adulthood risk factors and childhood socioeconomic conditions attenuated the HR for the lowest adulthood SEP to 1.34 (95% CI: 0.99-1.82) for men and 1.19 (95% CI: 0.65-2.15) for women. Conclusions Adulthood material, behavioural and psychosocial factors played a major role in explaining adulthood SEP inequalities in CVD mortality. Childhood socioeconomic circumstances made a modest contribution, mainly via their association with adulthood risk factors. Policies and interventions to reduce health inequalities are likely to be most effective when they consider the influence of socioeconomic circumstances across the entire life course and, in particular, poor material conditions and unhealthy behaviours in adulthood.
Abstract:
With the overwhelming increase in the amount of text on the web, it is almost impossible for people to keep abreast of up-to-date information. Text mining is a process by which interesting information is derived from text through the discovery of patterns and trends. Text mining algorithms are used to guarantee the quality of extracted knowledge. However, pattern extraction using text or data mining algorithms leads to noisy and inconsistent patterns. Thus, different challenges arise, such as how to understand these patterns, whether the model that has been used is suitable, and whether all the patterns that have been extracted are relevant. Furthermore, the research raises the question of how to assign a correct weight to the extracted knowledge. To address these issues, this paper presents a text post-processing method which uses a pattern co-occurrence matrix to find the relations between extracted patterns in order to reduce noisy patterns. The main objective of this paper is not only to reduce the number of closed sequential patterns, but also to improve the performance of pattern mining. The experimental results on the Reuters Corpus Volume 1 data collection and TREC filtering topics show that the proposed method is promising.
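As a rough illustration of the central data structure, a pattern co-occurrence matrix over documents might be assembled as below. The counting scheme and any subsequent noise-reduction rule are assumptions here, since the abstract does not specify them:

```python
import numpy as np

def pattern_cooccurrence(doc_patterns, patterns):
    """Build a symmetric co-occurrence matrix: entry (i, j) counts how many
    documents contain both pattern i and pattern j."""
    index = {p: i for i, p in enumerate(patterns)}
    m = np.zeros((len(patterns), len(patterns)), dtype=int)
    for doc in doc_patterns:
        present = [index[p] for p in set(doc) if p in index]
        for i in present:
            for j in present:
                m[i, j] += 1
    return m

# Illustrative usage: three documents, three discovered patterns
docs = [["a b", "b c"], ["a b"], ["b c", "c d"]]
m = pattern_cooccurrence(docs, ["a b", "b c", "c d"])
print(m)  # patterns that rarely co-occur with others could be flagged as noisy
```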
Abstract:
To recognize faces in video, face appearances have been widely modeled as piece-wise local linear models which linearly approximate the smooth yet non-linear low-dimensional face appearance manifolds. The choice of representation for the local models is crucial. Most existing methods learn each local model individually, meaning that they only anticipate variations within each class. In this work, we propose to represent local models as Gaussian distributions which are learned simultaneously using heteroscedastic probabilistic linear discriminant analysis (PLDA). Each gallery video is therefore represented as a collection of such distributions. With PLDA, not only are the within-class variations estimated during training, but the separability between classes is also maximized, leading to improved discrimination. The heteroscedastic PLDA itself is adapted from the standard PLDA to approximate face appearance manifolds more accurately. Instead of assuming a single global within-class covariance, the heteroscedastic PLDA learns different within-class covariances specific to each local model. In the recognition phase, a probe video is matched against gallery samples through the fusion of point-to-model distances. Experiments on the Honda and MoBo datasets have shown the merit of the proposed method, which achieves better performance than the state-of-the-art technique.
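The matching step can be illustrated with a hedged sketch: each gallery local model is a Gaussian (mean, covariance), a probe frame is scored by its Mahalanobis distance to a model, and per-frame distances are fused per gallery video. The minimum/mean fusion rule below is an assumption, as the abstract does not state which fusion is used:

```python
import numpy as np

def point_to_model_distance(x, mean, cov):
    """Mahalanobis distance from a probe frame x to one Gaussian local model."""
    diff = x - mean
    return float(np.sqrt(diff @ np.linalg.solve(cov, diff)))

def match_probe(probe_frames, gallery):
    """Score each gallery video (a set of (mean, covariance) local models)
    by fusing per-frame minimum point-to-model distances."""
    scores = {}
    for label, models in gallery.items():
        per_frame = [min(point_to_model_distance(x, m, c) for m, c in models)
                     for x in probe_frames]
        scores[label] = float(np.mean(per_frame))  # simple mean fusion
    return min(scores, key=scores.get)             # smallest fused distance wins
```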
Abstract:
We test competing linear and curvilinear predictions about the relationship between board diversity and performance. The predictions were tested using archival data on 288 organizations listed on the Australian Securities Exchange. The findings provide additional evidence on the business case for board gender diversity and refine the business case for board age diversity.
Abstract:
The generation of a correlation matrix from a large set of long gene sequences is a common requirement in many bioinformatics problems such as phylogenetic analysis. The generation is not only computationally intensive but also requires significant memory resources as, typically, few gene sequences can be simultaneously stored in primary memory. The standard practice in such computation is to use frequent input/output (I/O) operations. Therefore, minimizing the number of these operations will yield much faster run-times. This paper develops an approach for the faster and scalable computing of large-size correlation matrices through the full use of available memory and a reduced number of I/O operations. The approach is scalable in the sense that the same algorithms can be executed on different computing platforms with different amounts of memory and can be applied to different problems with different correlation matrix sizes. The significant performance improvement of the approach over the existing approaches is demonstrated through benchmark examples.
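The abstract does not give the algorithms themselves, but the blocked-I/O idea can be sketched as follows, assuming a hypothetical binary file of per-sequence feature vectors. Each row block is read once per outer pass and correlated against every other block, so available memory controls the block size rather than the matrix size:

```python
import numpy as np

def blocked_correlation(path, n_seqs, seq_len, block):
    """Correlate n_seqs sequences (rows of a float32 file) block by block,
    so only two blocks of rows are ever held in memory at once."""
    data = np.memmap(path, dtype=np.float32, mode="r", shape=(n_seqs, seq_len))
    corr = np.empty((n_seqs, n_seqs), dtype=np.float32)
    for i0 in range(0, n_seqs, block):
        rows = np.array(data[i0:i0 + block])      # row block read into memory
        for j0 in range(0, n_seqs, block):
            cols = np.array(data[j0:j0 + block])  # column blocks are re-read
            full = np.corrcoef(rows, cols)        # (r+c) x (r+c) matrix
            corr[i0:i0 + len(rows), j0:j0 + len(cols)] = \
                full[:len(rows), len(rows):]      # keep the cross-block part
    return corr
```

Larger blocks mean fewer I/O passes; the same code runs unchanged on platforms with different memory budgets, which is the sense of scalability described above.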
Abstract:
Introduction: The human patellar tendon is highly adaptive to changes in habitual loading, but little is known about its acute mechanical response to exercise. This research evaluated the immediate transverse strain response of the patellar tendon to a bout of resistive quadriceps exercise. Methods: Twelve healthy adult males (mean age 34.0+/-12.1 years, height 1.75+/-0.09 m and weight 76.7+/-12.3 kg) free of knee pain participated in the research. A 10-5 MHz linear-array transducer was used to acquire standardised sagittal sonograms of the right patellar tendon immediately prior to and following 90 repetitions of a double-leg parallel-squat exercise performed against a resistance of 175% bodyweight. Tendon thickness was determined 20 mm distal to the pole of the patella, and transverse Hencky strain was calculated as the natural log of the ratio of post- to pre-exercise tendon thickness, expressed as a percentage. Measures of tendon echotexture (echogenicity and entropy) were also calculated from subsequent gray-scale profiles. Results: Quadriceps exercise resulted in an immediate decrease in patellar tendon thickness (P<.05), equating to a transverse strain of -22.5+/-3.4%, and was accompanied by increased tendon echogenicity (P<.05) and decreased entropy (P<.05). The transverse strain response of the patellar tendon was significantly correlated with both tendon echogenicity (r = -0.58, P<.05) and entropy following exercise (r = 0.73, P<.05), while older age was associated with greater entropy of the patellar tendon prior to exercise (r = 0.79, P<.05) and a reduced transverse strain response (r = 0.61, P<.05) following exercise. Conclusions: This study is the first to show that quadriceps exercise invokes structural alignment and fluid movement within the tendon matrix that are manifest as changes in echotexture and transverse strain in the patellar tendon.
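The strain measure used above is compact enough to state exactly: transverse Hencky strain is the natural log of the post- to pre-exercise thickness ratio, expressed as a percentage. A minimal sketch with illustrative thickness values:

```python
import numpy as np

def transverse_hencky_strain(pre_mm, post_mm):
    """Transverse Hencky strain (%) = 100 * ln(post-exercise thickness /
    pre-exercise thickness); negative values indicate tendon thinning."""
    return 100.0 * np.log(post_mm / pre_mm)

# Illustrative values only: a tendon thinning from 4.0 mm to 3.2 mm
print(transverse_hencky_strain(4.0, 3.2))  # ≈ -22.3%, close to the mean reported
```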
Abstract:
Welcome to the Quality assessment matrix. This matrix is designed for highly qualified discipline experts to evaluate their course, major or unit in a systematic manner. The primary purpose of the Quality assessment matrix is to provide a tool with which a group of academic staff at universities can collaboratively review the assessment within a course, major or unit annually. The annual review will ensure you are ready for an external curriculum review at any point in time. This tool is designed for use in a workshop format with one, two or more academic staff, and leads to an action plan for implementation.
Abstract:
Robust hashing is an emerging field that can be used to hash certain data types in applications unsuitable for traditional cryptographic hashing methods. Traditional hashing functions have been used extensively for data/message integrity, data/message authentication, efficient file identification and password verification. These applications are possible because the hashing process is compressive, allowing for efficient comparisons in the hash domain, but non-invertible, meaning hashes can be used without revealing the original data. These techniques were developed for deterministic (non-changing) inputs such as files and passwords. For such data types a 1-bit or one-character change can be significant; as a result, the hashing process is sensitive to any change in the input. Unfortunately, there are certain applications where input data are not perfectly deterministic and minor changes cannot be avoided. Digital images and biometric features are two types of data where such changes exist but do not alter the meaning or appearance of the input. For such data types cryptographic hash functions cannot be usefully applied. In light of this, robust hashing has been developed as an alternative to cryptographic hashing and is designed to be robust to minor changes in the input. Although similar in name, robust hashing is fundamentally different from cryptographic hashing. Current robust hashing techniques are not based on cryptographic methods, but instead on pattern recognition techniques. Modern robust hashing algorithms consist of feature extraction, followed by a randomization stage that introduces non-invertibility and compression, followed by quantization and binary encoding to produce a binary hash output. In order to preserve the robustness of the extracted features, most randomization methods are linear, which is detrimental to the security properties required of hash functions. Furthermore, the quantization and encoding stages used to binarize real-valued features require the learning of appropriate quantization thresholds. How these thresholds are learnt has an important effect on hashing accuracy, and the mere presence of such thresholds is a source of information leakage that can reduce hashing security. This dissertation outlines a systematic investigation of the quantization and encoding stages of robust hash functions. While existing literature has focused on the importance of the quantization scheme, this research is the first to emphasise the importance of quantizer training for both hashing accuracy and hashing security. The quantizer training process is presented in a statistical framework which allows a theoretical analysis of the effects of quantizer training on hashing performance. This is experimentally verified using a number of baseline robust image hashing algorithms over a large database of real-world images. This dissertation also proposes a new randomization method for robust image hashing based on Higher Order Spectra (HOS) and Radon projections. The method is non-linear, which is an essential requirement for non-invertibility. The method is also designed to produce features better suited to quantization and encoding. The system can operate without the need for quantizer training, is more easily encoded and displays improved hashing performance when compared to existing robust image hashing algorithms. The dissertation also shows how the HOS method can be adapted to work with biometric features obtained from 2D and 3D face images.
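The quantization and encoding stage under investigation can be sketched as follows. Median-based threshold training is used purely as an illustrative assumption; the dissertation's point is precisely that this training step itself affects both accuracy and security:

```python
import numpy as np

def train_thresholds(features):
    """Learn per-dimension thresholds from training features
    (n_samples x n_dims); medians give balanced 0/1 bits per dimension."""
    return np.median(features, axis=0)

def binary_hash(feature, thresholds):
    """1-bit quantization: each real-valued feature dimension is encoded
    as 1 if it exceeds its learned threshold, else 0."""
    return (feature > thresholds).astype(np.uint8)

# Illustrative usage with random data standing in for extracted features
rng = np.random.default_rng(1)
train = rng.normal(size=(1000, 16))
th = train_thresholds(train)
h1 = binary_hash(rng.normal(size=16), th)
h2 = binary_hash(rng.normal(size=16), th)
print(np.count_nonzero(h1 != h2))  # Hamming distance between the two hashes
```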
Abstract:
This article analyzes a series of stories and artworks that were produced in a collective biography workshop. It explores Judith Butler’s concept of the heterosexual matrix combined with a Deleuzian theoretical framework. The article begins with an overview of Butler’s concept of the heterosexual matrix and her theorizations on how it might be disrupted. It then suggests how a Deleuzian framework offers other tools for analyzing these ruptures at the micro level of girls’ everyday interactions.
Abstract:
This document provides data for the case study presented in our recent earthwork planning papers. Some results are also provided in a graphical format using Excel.
Abstract:
Despite its potential multiple contributions to sustainable policy objectives, urban transit is generally not widely used by the public in terms of market share compared to that of automobiles, particularly in affluent societies with low-density urban forms like Australia. Transit service providers need to attract more people to transit by improving transit quality of service. The key to cost-effective transit service improvements lies in accurate evaluation of policy proposals, taking into account their impacts on transit users. If transit providers knew what is more or less important to their customers, they could focus their efforts on optimising customer-oriented service. Policy interventions could also be specified to influence transit users' travel decisions, targeting customer satisfaction and broader community welfare. This significance motivates research into the relationship between urban transit quality of service and users' perceptions and behaviour. This research focused on two dimensions of transit users' travel behaviour: route choice and access arrival time choice. The study area chosen was a busy urban transit corridor linking the Brisbane central business district (CBD) and the St. Lucia campus of The University of Queensland (UQ). This multi-system corridor provided a 'natural experiment' for transit users between the CBD and UQ, as they can choose between busway route 109 (with grade-separated exclusive right-of-way), ordinary on-street bus route 412, and the linear fast-ferry CityCat on the Brisbane River. The population of interest was defined as attendees at UQ who travelled from the CBD or from a suburb via the CBD. Two waves of internet-based self-completion questionnaire surveys were conducted to collect data on sampled passengers' perceptions of transit service quality and their use of public transit in the study area. The first-wave survey collected behaviour and attitude data on respondents' daily transit usage and their direct importance ratings of route-level transit service quality factors. A series of statistical analyses was conducted to examine the relationships between transit users' travel and personal characteristics and their transit usage characteristics. A factor-cluster segmentation procedure was applied to respondents' importance ratings of service quality variables regarding transit route preference, to explore users' varying perspectives on transit quality of service. Based on the service quality perceptions collected in the second-wave survey, a series of quality criteria for the transit routes under study was quantitatively measured, in particular travel time reliability in terms of schedule adherence. It was shown that mixed traffic conditions and peak-period effects can affect transit service reliability. Multinomial logit models of transit users' route choice were estimated using the route-level service quality perceptions collected in the second-wave survey. The relative importance of service quality factors was derived from the choice models' significant parameter estimates, including access and egress times, seat availability, and the busway system. The parameter estimates were interpreted, in particular the in-vehicle-time equivalents of access and egress times and of busway in-vehicle time. Market segmentation by trip origin was applied to investigate the difference in magnitude between the parameter estimates of access and egress times. The significant costs of transfers in transit trips were highlighted.
These importance ratios were then applied to the quality perceptions collected as revealed-preference (RP) data to compare satisfaction levels across service attributes and to generate an action relevance matrix that prioritises attributes for quality improvement. An empirical study of the relationship between average passenger waiting time and transit service characteristics was performed using the perceived service quality data. Passenger arrivals for services with long headways (over 15 minutes) were found to be clearly coordinated with the scheduled departure times of transit vehicles in order to reduce waiting time. This drove further investigation and modelling innovations in passengers' access arrival time choice and its relationships with transit service characteristics and average passenger waiting time. Specifically, original contributions were made in the formulation of expected waiting time, in the analysis of passengers' risk-averse attitude to missing a desired service run in their choice of access arrival time, and in extensions of the utility function specification for modelling the passenger access arrival distribution, using complicated expected utility forms and non-linear probability weighting to explicitly accommodate the risk of missing an intended service and passengers' risk aversion. Discussion of this research's contributions to knowledge, its limitations, and recommendations for future research is provided in the concluding section of this thesis.
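The expected waiting time calculation can be illustrated with a minimal sketch under simplifying assumptions (deterministic departures, the passenger never misses the intended run); the thesis' actual formulation is richer, additionally modelling the risk of missing a service and passengers' risk aversion via non-linear probability weighting:

```python
import numpy as np

def expected_wait(arrival_pdf, headway, n=600):
    """Expected waiting time for a passenger whose arrival time within one
    headway follows arrival_pdf(t), with the next departure at t = headway."""
    t = np.linspace(0.0, headway, n, endpoint=False)
    p = arrival_pdf(t)
    p = p / p.sum()                     # normalise to a discrete distribution
    return float(np.sum(p * (headway - t)))

# Uniform (random) arrivals recover the classic headway/2 result
print(expected_wait(lambda t: np.ones_like(t), headway=10.0))  # ~5.0 min
# Arrivals coordinated with the timetable (concentrated just before
# departure) shorten the expected wait, as observed for long headways
print(expected_wait(lambda t: np.exp(0.5 * t), headway=10.0))
```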
Abstract:
Matrix metalloproteinases (MMPs) are proteolytic enzymes important to wound healing. In non-healing wounds, it has been suggested that MMP levels become dysfunctional; hence, it is of great interest to develop sensors to detect MMP biomarkers. This study presents the development of a label-free optical MMP biosensor based on a functionalised porous silicon (pSi) thin film. The biosensor is fabricated by immobilising a peptidomimetic MMP inhibitor in the porous layer using hydrosilylation followed by amide coupling. The binding of MMP to the immobilised inhibitor translates into a change of effective optical thickness (EOT) over time. We investigate the effect of surface functionalisation on the stability of the pSi surface and evaluate the sensing performance. We successfully demonstrate MMP detection in buffer solution and human wound fluid at physiologically relevant concentrations. This biosensor may find application as a point-of-care device that is prognostic of the healing trajectory of chronic wounds.
Abstract:
This project addresses the viability of lightweight, low-power-consumption, flexible, large-format LED screens. The investigation encompasses all aspects of the electrical and mechanical design, individually and as a system, and achieves a successful full-scale prototype. The prototype implements novel techniques to achieve large-displacement colour aliasing, a purely passive thermal management solution, a rapid deployment system, and individual seven-bit LED current control with two-way display communication, auto-configuration and complete signal redundancy, all of which are in direct response to industry needs.