375 results for Linear matrix inequalities
Abstract:
High resolution transmission electron microscopy of the Mighei carbonaceous chondrite matrix has revealed the presence of a new mixed layer structure material. This mixed-layer material consists of an ordered arrangement of serpentine-type (S) and brucite-type (B) layers in the sequence …SBBSBB… Electron diffraction and imaging techniques show that the basal periodicity is ~17 Å. Discrete crystals of SBB-type material are typically curved, of small size (<1 μm) and show structural variations similar to the serpentine group minerals. Mixed-layer material also occurs in association with planar serpentine. Characteristics of SBB-type material are not consistent with known terrestrial mixed-layer clay minerals. Evidence for formation by a condensation event or by subsequent alteration of preexisting material is not yet apparent. © 1982.
Abstract:
Carbonaceous chondrites provide valuable information as they are the least altered examples of early Solar System material [1]. The matrix constitutes a major proportion of carbonaceous chondrites. Despite many past attempts, unambiguous identification of the minerals in the matrix has not been totally successful [2]. This is mainly due to the extremely fine-grained nature of the matrix phases. Recently, progress in the characterisation of these phases has been made by electron diffraction studies [3,4]. We present here the direct observation, by high resolution imaging, of phases in carbonaceous chondrite matrices. We used ion-thinned sections from the Murchison C2(M) meteorite for transmission electron microscopy. The Murchison matrix contains both ordered and disordered intergrowths of serpentine-like and brucite-like layers. Such mixed-layer structures are new types of layer silicates. © 1979 Nature Publishing Group.
Abstract:
This chapter presents the analytical solution of the two-dimensional linear stretching sheet problem involving a non-Newtonian liquid and suction by (a) invoking the boundary layer approximation and (b) using this result to solve the stretching sheet problem without the boundary layer approximation. The basic boundary layer equations for momentum, which are non-linear partial differential equations, are converted into non-linear ordinary differential equations by means of a similarity transformation. The results reveal a new analytical procedure for solving the boundary layer equations arising in a linear stretching sheet problem involving a non-Newtonian liquid (Walters' liquid B). The present study throws light on the analytical solution of a class of boundary layer equations arising in the stretching sheet problem.
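For orientation, a minimal sketch of the kind of similarity reduction the abstract describes, written here for the classical Newtonian stretching sheet with wall velocity u_w = cx (the Walters' liquid B case adds viscoelastic terms not shown):

```latex
\[
  \eta = y\sqrt{\tfrac{c}{\nu}}, \qquad
  u = cx\,f'(\eta), \qquad
  v = -\sqrt{c\nu}\,f(\eta),
\]
\[
  f''' + f f'' - (f')^{2} = 0, \qquad
  f(0) = f_w, \quad f'(0) = 1, \quad f'(\infty) = 0,
\]
```

where f_w > 0 is a suction parameter. The transformation collapses the non-linear momentum PDE into a single non-linear ODE in f(η), which is the step the chapter then carries out for the non-Newtonian case.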
Abstract:
In recent years considerable attention has been paid to the numerical solution of stochastic ordinary differential equations (SODEs), as SODEs are often more appropriate than their deterministic counterparts in many modelling situations. However, unlike the deterministic case, numerical methods for SODEs are considerably less sophisticated due to the difficulty in representing the (possibly large number of) random variable approximations to the stochastic integrals. Although Burrage and Burrage [High strong order explicit Runge-Kutta methods for stochastic ordinary differential equations, Applied Numerical Mathematics 22 (1996) 81-101] were able to construct strong local order 1.5 stochastic Runge-Kutta methods for certain cases, it is known that all extant stochastic Runge-Kutta methods suffer an order reduction down to strong order 0.5 if there is non-commutativity between the functions associated with the multiple Wiener processes. This order reduction down to that of the Euler-Maruyama method imposes severe difficulties in obtaining meaningful solutions in a reasonable time frame, and this paper attempts to circumvent these difficulties by some new techniques. An additional difficulty in solving SODEs arises even in the linear case, since it is not possible to write the solution analytically in terms of matrix exponentials unless there is a commutativity property between the functions associated with the multiple Wiener processes. Thus in the present paper, first, the work of Magnus [On the exponential solution of differential equations for a linear operator, Communications on Pure and Applied Mathematics 7 (1954) 649-673] (applied to deterministic non-commutative linear problems) will be applied to non-commutative linear SODEs, and methods of strong order 1.5 for arbitrary, linear, non-commutative SODE systems will be constructed - hence giving an accurate approximation to the general linear problem. Secondly, for general nonlinear non-commutative systems with an arbitrary number (d) of Wiener processes, it is shown that strong local order 1 Runge-Kutta methods with d + 1 stages can be constructed by evaluating a set of Lie brackets as well as the standard function evaluations. A method is then constructed which can be efficiently implemented in a parallel environment for this arbitrary number of Wiener processes. Finally some numerical results are presented which illustrate the efficacy of these approaches. © 1999 Elsevier Science B.V. All rights reserved.
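As context for the order reduction mentioned above, here is a minimal Python sketch (not the paper's method) of the Euler-Maruyama baseline, the strong order 0.5 scheme to which stochastic Runge-Kutta methods degrade when the diffusion functions of the multiple Wiener processes do not commute; the matrices in the example are illustrative assumptions:

```python
import numpy as np

def euler_maruyama(a, b_list, y0, t_end, n_steps, rng=None):
    """Strong order 0.5 scheme for dy = a(y) dt + sum_j b_j(y) dW_j."""
    rng = rng or np.random.default_rng()
    h = t_end / n_steps
    y = np.asarray(y0, dtype=float)
    for _ in range(n_steps):
        # One N(0, h) increment per Wiener process.
        dW = rng.normal(0.0, np.sqrt(h), size=len(b_list))
        y = y + a(y) * h + sum(dWj * bj(y) for bj, dWj in zip(b_list, dW))
    return y

# A linear SODE dy = A y dt + B1 y dW1 + B2 y dW2 with B1 @ B2 != B2 @ B1,
# i.e. the non-commutative linear setting the paper targets.
A  = np.array([[-1.0, 0.5], [0.0, -2.0]])
B1 = np.array([[0.2, 0.0], [0.0, 0.1]])
B2 = np.array([[0.0, 0.3], [0.1, 0.0]])
y = euler_maruyama(lambda y: A @ y,
                   [lambda y: B1 @ y, lambda y: B2 @ y],
                   y0=[1.0, 1.0], t_end=1.0, n_steps=1000)
```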
Abstract:
Introduction and aims: Individual smokers from disadvantaged backgrounds are less likely to quit, which contributes to widening inequalities in smoking. Residents of disadvantaged neighbourhoods are more likely to smoke, and neighbourhood inequalities in smoking may also be widening because of neighbourhood differences in rates of cessation. This study examined the association between neighbourhood disadvantage and smoking cessation and its relationship with neighbourhood inequalities in smoking. Design and methods: A multilevel longitudinal study of mid-aged (40-67 years) residents (n=6915) of Brisbane, Australia, who lived in the same neighbourhoods (n=200) in 2007 and 2009. Neighbourhood inequalities in cessation and smoking were analysed using multilevel logistic regression and Markov chain Monte Carlo simulation. Results: After adjustment for individual-level socioeconomic factors, the probability of quitting smoking between 2007 and 2009 was lower for residents of disadvantaged neighbourhoods (9.0%-12.8%) than for their counterparts in more advantaged neighbourhoods (20.7%-22.5%). These inequalities in cessation manifested in widening inequalities in smoking: in 2007 the between-neighbourhood variance in rates of smoking was 0.242 (p≤0.001) and in 2009 it was 0.260 (p≤0.001). In 2007, residents of the most disadvantaged neighbourhoods were 88% (OR 1.88, 95% CrI 1.41-2.49) more likely to smoke than residents of the least disadvantaged neighbourhoods; the corresponding difference in 2009 was 98% (OR 1.98, 95% CrI 1.48-2.66). Conclusion: Fundamentally, social and economic inequalities at the neighbourhood and individual levels cause smoking and cessation inequalities. Reducing these inequalities will require comprehensive, well-funded, and targeted tobacco control efforts and equity-based policies that address the social and economic determinants of smoking.
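A hedged sketch of the model family behind the reported between-neighbourhood variances (the authors' exact specification is not reproduced here): a two-level logistic regression with a neighbourhood random intercept,

```latex
\[
  \operatorname{logit}\Pr(y_{ij} = 1)
    = \beta_0 + \boldsymbol{\beta}^{\top}\mathbf{x}_{ij} + u_j,
  \qquad u_j \sim N(0, \sigma_u^2),
\]
```

where y_ij is the smoking status of resident i in neighbourhood j, x_ij are individual-level covariates, and σ_u² is the between-neighbourhood variance (the 0.242 and 0.260 figures above).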
Abstract:
Background The mechanisms underlying socioeconomic inequalities in mortality from cardiovascular diseases (CVD) are largely unknown. We studied the contribution of childhood socioeconomic conditions and adulthood risk factors to inequalities in CVD mortality in adulthood. Methods The prospective GLOBE study was carried out in the Netherlands, with baseline data from 1991, and linked with the cause-of-death register in 2007. At baseline, participants reported on adulthood socioeconomic position (SEP) (own educational level), childhood socioeconomic conditions (occupational level of respondent's father), and a broad range of adulthood risk factors (health behaviours, material circumstances, psychosocial factors). The present study is based on 5,395 men and 6,306 women, and the data were analysed using Cox regression models and hazard ratios (HR). Results A low adulthood SEP was associated with increased CVD mortality for men (HR 1.84; 95% CI: 1.41-2.39) and women (HR 1.80; 95% CI: 1.04-3.10). Those with poorer childhood socioeconomic conditions were more likely to die from CVD in adulthood, but this reached statistical significance only among men with the poorest childhood socioeconomic circumstances. About half of the investigated adulthood risk factors showed significant associations with CVD mortality among both men and women, namely renting a house, experiencing financial problems, smoking, physical activity and marital status. Alcohol consumption and BMI showed a U-shaped relationship with CVD mortality among women, with the risk being significantly greater for both abstainers and heavy drinkers, and among women who were underweight or obese. Among men, being single or divorced and using sleep/anxiety drugs increased the risk of CVD mortality. In explanatory models, the largest contributors to adulthood CVD inequalities were material conditions for men (42%; 95% CI: −73 to −20) and behavioural factors for women (55%; 95% CI: −191 to −28). Simultaneous adjustment for adulthood risk factors and childhood socioeconomic conditions attenuated the HR for the lowest adulthood SEP to 1.34 (95% CI: 0.99-1.82) for men and 1.19 (95% CI: 0.65-2.15) for women. Conclusions Adulthood material, behavioural and psychosocial factors played a major role in explaining adulthood SEP inequalities in CVD mortality. Childhood socioeconomic circumstances made a modest contribution, mainly via their association with adulthood risk factors. Policies and interventions to reduce health inequalities are likely to be most effective when considering the influence of socioeconomic circumstances across the entire life course and, in particular, poor material conditions and unhealthy behaviours in adulthood.
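For readers unfamiliar with the reported HRs, a brief reminder of the Cox model form the analysis rests on (a generic statement, not the authors' exact covariate set):

```latex
\[
  h(t \mid \mathbf{x}) = h_0(t)\,\exp(\boldsymbol{\beta}^{\top}\mathbf{x}),
  \qquad \mathrm{HR}_k = \exp(\beta_k),
\]
```

so an HR of 1.84 for low adulthood SEP corresponds to an 84% higher instantaneous risk of CVD death, all else equal.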
Abstract:
With the overwhelming increase in the amount of text on the web, it is almost impossible for people to keep abreast of up-to-date information. Text mining is a process by which interesting information is derived from text through the discovery of patterns and trends. Text mining algorithms are used to guarantee the quality of extracted knowledge. However, patterns extracted using text or data mining methods often include noisy patterns and inconsistencies. Thus, different challenges arise, such as how to understand these patterns, whether the model that has been used is suitable, and whether all the extracted patterns are relevant. Furthermore, the research raises the question of how to assign a correct weight to the extracted knowledge. To address these issues, this paper presents a text post-processing method which uses a pattern co-occurrence matrix to find the relations between extracted patterns in order to reduce noisy patterns. The main objective of this paper is not only to reduce the number of closed sequential patterns, but also to improve the performance of pattern mining. The experimental results on the Reuters Corpus Volume 1 data collection and TREC filtering topics show that the proposed method is promising.
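A minimal sketch of what a pattern co-occurrence matrix can look like (the paper's exact weighting and noise-reduction rule are not given here): entry (i, j) counts the documents in which extracted patterns i and j appear together, so patterns that rarely co-occur with others can be flagged as candidate noise.

```python
import numpy as np

def cooccurrence_matrix(doc_patterns):
    """Entry (i, j) counts documents containing both pattern i and pattern j."""
    patterns = sorted({p for doc in doc_patterns for p in doc})
    index = {p: k for k, p in enumerate(patterns)}
    M = np.zeros((len(patterns), len(patterns)), dtype=int)
    for doc in doc_patterns:
        present = [index[p] for p in doc]
        for i in present:
            for j in present:
                M[i, j] += 1
    return patterns, M

# Toy usage: each set holds the patterns extracted from one document.
docs = [{"oil price", "crude"}, {"oil price", "market"}, {"crude", "market"}]
pats, M = cooccurrence_matrix(docs)
```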
Abstract:
To recognize faces in video, face appearances have been widely modeled as piece-wise local linear models which linearly approximate the smooth yet non-linear low-dimensional face appearance manifolds. The choice of representation for the local models is crucial. Most existing methods learn each local model individually, meaning that they only anticipate variations within each class. In this work, we propose to represent local models as Gaussian distributions which are learned simultaneously using heteroscedastic probabilistic linear discriminant analysis (PLDA). Each gallery video is therefore represented as a collection of such distributions. With PLDA, not only are the within-class variations estimated during training, but the separability between classes is also maximized, leading to improved discrimination. The heteroscedastic PLDA itself is adapted from the standard PLDA to approximate face appearance manifolds more accurately. Instead of assuming a single global within-class covariance, the heteroscedastic PLDA learns different within-class covariances specific to each local model. In the recognition phase, a probe video is matched against gallery samples through the fusion of point-to-model distances. Experiments on the Honda and MoBo datasets have shown the merit of the proposed method, which achieves better performance than the state-of-the-art technique.
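A compact sketch of the modelling idea (notation assumed here, not taken from the paper): standard PLDA treats a face feature x_ij of class i as

```latex
\[
  \mathbf{x}_{ij} = \boldsymbol{\mu} + \mathbf{F}\mathbf{h}_i + \boldsymbol{\varepsilon}_{ij},
  \qquad \boldsymbol{\varepsilon}_{ij} \sim N(\mathbf{0}, \boldsymbol{\Sigma}_w),
\]
```

with a single global within-class covariance Σ_w; the heteroscedastic variant described above instead learns a separate Σ_w^(k) for each local model k on the appearance manifold.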
Abstract:
We test competing linear and curvilinear predictions between board diversity and performance. The predictions were tested using archival data on 288 organizations listed on the Australian Securities Exchange. The findings provide additional evidence on the business case for board gender diversity and refine the business case for board age diversity.
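In regression terms, the competing predictions amount to comparing a linear specification with one that adds a quadratic term (a generic sketch, not the authors' full model):

```latex
\[
  y = \beta_0 + \beta_1 D + \varepsilon
  \qquad \text{vs.} \qquad
  y = \beta_0 + \beta_1 D + \beta_2 D^2 + \varepsilon,
\]
```

where y is firm performance and D is board diversity; a significant β₂ favours the curvilinear prediction.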
Abstract:
The generation of a correlation matrix from a large set of long gene sequences is a common requirement in many bioinformatics problems such as phylogenetic analysis. The generation is not only computationally intensive but also requires significant memory resources because, typically, only a few gene sequences can be held in primary memory at once. The standard practice in such computation is to use frequent input/output (I/O) operations; therefore, minimizing the number of these operations will yield much faster run-times. This paper develops an approach for the faster and scalable computing of large-size correlation matrices through the full use of available memory and a reduced number of I/O operations. The approach is scalable in the sense that the same algorithms can be executed on different computing platforms with different amounts of memory and can be applied to different problems with different correlation matrix sizes. The significant performance improvement of the approach over existing approaches is demonstrated through benchmark examples.
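A hedged Python sketch of the general blocked strategy the abstract describes (the paper's exact algorithm, file layout and sequence encoding are not reproduced): rows are read in blocks sized to the available memory, so each pair of blocks is loaded once rather than re-read per element.

```python
import numpy as np

def blocked_correlation(path, n_seq, seq_len, block):
    """n_seq x n_seq correlation matrix from encoded sequences on disk,
    reading `block` rows per I/O operation."""
    data = np.memmap(path, dtype=np.float32, mode="r", shape=(n_seq, seq_len))
    corr = np.empty((n_seq, n_seq), dtype=np.float32)
    for i in range(0, n_seq, block):
        X = np.array(data[i:i + block])        # one block read
        for j in range(i, n_seq, block):
            Y = np.array(data[j:j + block])    # paired block read
            # Correlate the stacked blocks, keep the X-vs-Y quadrant,
            # and mirror it into the symmetric result.
            C = np.corrcoef(np.vstack([X, Y]))[:len(X), len(X):]
            corr[i:i + len(X), j:j + len(Y)] = C
            corr[j:j + len(Y), i:i + len(X)] = C.T
    return corr
```

The same code scales across platforms by changing `block` to match the memory actually available, which is the sense in which such an approach is scalable.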
Abstract:
Introduction: The human patellar tendon is highly adaptive to changes in habitual loading, but little is known about its acute mechanical response to exercise. This research evaluated the immediate transverse strain response of the patellar tendon to a bout of resistive quadriceps exercise. Methods: Twelve healthy adult males (mean age 34.0±12.1 years, height 1.75±0.09 m and weight 76.7±12.3 kg) free of knee pain participated in the research. A 10-5 MHz linear-array transducer was used to acquire standardised sagittal sonograms of the right patellar tendon immediately prior to and following 90 repetitions of a double-leg parallel-squat exercise performed against a resistance of 175% bodyweight. Tendon thickness was determined 20 mm distal to the pole of the patella, and transverse Hencky strain was calculated as the natural log of the ratio of post- to pre-exercise tendon thickness, expressed as a percentage. Measures of tendon echotexture (echogenicity and entropy) were also calculated from subsequent gray-scale profiles. Results: Quadriceps exercise resulted in an immediate decrease in patellar tendon thickness (P<.05), equating to a transverse strain of −22.5±3.4%, and was accompanied by increased tendon echogenicity (P<.05) and decreased entropy (P<.05). The transverse strain response of the patellar tendon was significantly correlated with both tendon echogenicity (r=−0.58, P<.05) and entropy following exercise (r=0.73, P<.05), while older age was associated with greater entropy of the patellar tendon prior to exercise (r=0.79, P<.05) and a reduced transverse strain response (r=0.61, P<.05) following exercise. Conclusions: This study is the first to show that quadriceps exercise invokes structural alignment and fluid movement within the matrix that are manifested as changes in echotexture and transverse strain in the patellar tendon. © 2012 The American College of Sports Medicine
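For clarity, the strain measure used above, written out (with a worked example consistent with the reported figures):

```latex
\[
  \varepsilon_t = \ln\!\left(\frac{t_{\text{post}}}{t_{\text{pre}}}\right) \times 100\%,
\]
```

so a tendon that thins to 80% of its pre-exercise thickness has ε_t = ln(0.80) × 100% ≈ −22.3%, in line with the mean transverse strain of −22.5% reported.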
Abstract:
Welcome to the Quality assessment matrix. This matrix is designed for highly qualified discipline experts to evaluate their course, major or unit in a systematic manner. The primary purpose of the Quality assessment matrix is to provide a tool with which a group of academic staff at a university can collaboratively review the assessment within a course, major or unit annually. The annual review will leave you ready for an external curriculum review at any point in time. This tool is designed for use in a workshop format with one, two or more academic staff, and will lead to an action plan for implementation.
Abstract:
Robust hashing is an emerging field that can be used to hash certain data types in applications unsuitable for traditional cryptographic hashing methods. Traditional hashing functions have been used extensively for data/message integrity, data/message authentication, efficient file identification and password verification. These applications are possible because the hashing process is compressive, allowing for efficient comparisons in the hash domain, and non-invertible, meaning hashes can be used without revealing the original data. These techniques were developed for deterministic (non-changing) inputs such as files and passwords. For such data types a 1-bit or one-character change can be significant; as a result, the hashing process is sensitive to any change in the input. Unfortunately, there are certain applications where input data are not perfectly deterministic and minor changes cannot be avoided. Digital images and biometric features are two types of data where such changes exist but do not alter the meaning or appearance of the input. For such data types cryptographic hash functions cannot be usefully applied. In light of this, robust hashing has been developed as an alternative to cryptographic hashing and is designed to be robust to minor changes in the input. Although similar in name, robust hashing is fundamentally different from cryptographic hashing. Current robust hashing techniques are not based on cryptographic methods, but instead on pattern recognition techniques. Modern robust hashing algorithms consist of feature extraction, followed by a randomization stage that introduces non-invertibility and compression, followed by quantization and binary encoding to produce a binary hash output. In order to preserve the robustness of the extracted features, most randomization methods are linear, and this is detrimental to the security aspects required of hash functions. Furthermore, the quantization and encoding stages used to binarize real-valued features require the learning of appropriate quantization thresholds. How these thresholds are learnt has an important effect on hashing accuracy, and the mere presence of such thresholds is a source of information leakage that can reduce hashing security. This dissertation outlines a systematic investigation of the quantization and encoding stages of robust hash functions. While existing literature has focused on the importance of the quantization scheme, this research is the first to emphasise the importance of quantizer training for both hashing accuracy and hashing security. The quantizer training process is presented in a statistical framework which allows a theoretical analysis of the effects of quantizer training on hashing performance. This is experimentally verified using a number of baseline robust image hashing algorithms over a large database of real-world images. This dissertation also proposes a new randomization method for robust image hashing based on Higher Order Spectra (HOS) and Radon projections. The method is non-linear, which is an essential requirement for non-invertibility. The method is also designed to produce features more suited to quantization and encoding. The system can operate without the need for quantizer training, is more easily encoded, and displays improved hashing performance when compared to existing robust image hashing algorithms. The dissertation also shows how the HOS method can be adapted to work with biometric features obtained from 2D and 3D face images.
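As a concrete illustration of the quantization-and-encoding stage discussed above (a minimal sketch using median thresholds; the dissertation's actual schemes and features differ):

```python
import numpy as np

def train_thresholds(features):
    """Learn per-dimension quantization thresholds from training features.
    The dissertation's point: this training step itself affects both
    hashing accuracy and hashing security."""
    return np.median(features, axis=0)

def binary_hash(feature, thresholds):
    """Quantize a real-valued feature vector into a binary hash."""
    bits = (feature > thresholds).astype(int)
    return "".join(map(str, bits))

# Toy usage with random features standing in for extracted image features.
rng = np.random.default_rng(0)
train = rng.normal(size=(1000, 64))   # training feature vectors
th = train_thresholds(train)
h = binary_hash(rng.normal(size=64), th)
```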
Abstract:
This article analyzes a series of stories and artworks that were produced in a collective biography workshop. It explores Judith Butler's concept of the heterosexual matrix in combination with a Deleuzian theoretical framework. The article begins with an overview of Butler's concept of the heterosexual matrix and her theorizations on how it might be disrupted. It then suggests how a Deleuzian framework offers other tools for analyzing these ruptures at the micro level of girls' everyday interactions.
Abstract:
This document provides data for the case study presented in our recent earthwork planning papers. Some results are also provided in a graphical format using Excel.