900 results for LMIs (Linear Matrix Inequalities)
Abstract:
This study investigated interactions of protein-cleaving enzymes (or proteases) that promote prostate cancer progression. It provides the first evidence of a novel regulatory network of protease activity at the surface of cells. The proteases kallikrein-related peptidases 4 and 14, and matrix metalloproteinases 3 and 9, are cleaved at the cell surface by the cell surface proteases hepsin and TMPRSS2. These cleavage events potentially regulate activation of downstream targets of kallikreins 4 and 14, such as cell surface signalling via the protease-activated receptors (PARs) and cell growth-promoting factors such as hepatocyte growth factor.
Abstract:
Welcome to the Evaluation of course matrix. This matrix is designed for highly qualified discipline experts to evaluate their course, major or unit in a systematic manner. The primary purpose of the Evaluation of course matrix is to provide a tool with which a group of academic staff at a university can collaboratively review the assessment within a course, major or unit annually. The annual review will leave you ready for an external curricula review at any point in time. This tool is designed for use in a workshop format with one or more academic staff, and will lead to an action plan for implementation. I hope you find this tool useful in your assessment review.
Abstract:
Traditional sensitivity and elasticity analyses of matrix population models have been used to inform management decisions, but they ignore the economic costs of manipulating vital rates. For example, the growth rate of a population is often most sensitive to changes in adult survival rate, but this does not mean that increasing that rate is the best option for managing the population because it may be much more expensive than other options. To explore how managers should optimize their manipulation of vital rates, we incorporated the cost of changing those rates into matrix population models. We derived analytic expressions for locations in parameter space where managers should shift between management of fecundity and survival, for the balance between fecundity and survival management at those boundaries, and for the allocation of management resources to sustain that optimal balance. For simple matrices, the optimal budget allocation can often be expressed as simple functions of vital rates and the relative costs of changing them. We applied our method to management of the Helmeted Honeyeater (Lichenostomus melanops cassidix; an endangered Australian bird) and the koala (Phascolarctos cinereus) as examples. Our method showed that cost-efficient management of the Helmeted Honeyeater should focus on increasing fecundity via nest protection, whereas optimal koala management should focus on manipulating both fecundity and survival simultaneously. These findings are contrary to the cost-negligent recommendations of elasticity analysis, which would suggest focusing on managing survival in both cases. A further investigation of Helmeted Honeyeater management options, based on an individual-based model incorporating density dependence, spatial structure, and environmental stochasticity, confirmed that fecundity management was the most cost-effective strategy. Our results demonstrate that decisions that ignore economic factors will reduce management efficiency. 
©2006 Society for Conservation Biology.
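The cost-aware allocation the authors derive can be contrasted with a plain elasticity analysis, which is straightforward to compute. The sketch below is a minimal illustration only: the projection matrix, the per-unit management costs, and the cost-weighted ranking are all hypothetical and are not values from the study.

```python
import numpy as np

def elasticity_matrix(A):
    """Elasticities of the dominant eigenvalue (population growth rate)
    of a projection matrix A with respect to each matrix entry."""
    vals, W = np.linalg.eig(A)
    i = np.argmax(vals.real)
    lam, w = vals[i].real, W[:, i].real      # growth rate, stable stage structure
    vals_t, V = np.linalg.eig(A.T)
    v = V[:, np.argmax(vals_t.real)].real    # reproductive values
    S = np.outer(v, w) / (v @ w)             # sensitivities s_ij
    return (A / lam) * S                     # elasticities e_ij = (a_ij / lam) * s_ij

# Hypothetical two-stage model: top row = fecundities, bottom row = survival
A = np.array([[0.3, 1.2],
              [0.5, 0.8]])
E = elasticity_matrix(A)

# A cost-negligent ranking uses E alone; a crude cost-aware ranking divides
# each elasticity by a (made-up) per-unit cost of changing that rate
costs = np.array([[2.0, 2.0],     # fecundity management (e.g. nest protection)
                  [10.0, 10.0]])  # survival management, assumed more expensive
print(E / costs)                  # elasticity per unit cost
```

Elasticities sum to one, so dividing by cost changes only the ranking between fecundity and survival entries; the paper's full method goes further and derives where in parameter space the optimal allocation switches between the two.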
Abstract:
Linear assets are engineering infrastructure, such as pipelines, railway lines, and electricity cables, which span long distances and can be divided into different segments. Optimal management of such assets is critical for asset owners as they normally involve significant capital investment. Currently, Time Based Preventive Maintenance (TBPM) strategies are commonly used in industry to improve the reliability of such assets, as they are easy to implement compared with reliability or risk-based preventive maintenance strategies. Linear assets are normally of large scale and thus their preventive maintenance is costly. Their owners and maintainers are always seeking to optimize their TBPM outcomes in terms of minimizing total expected costs over a long term involving multiple maintenance cycles. These costs include repair costs, preventive maintenance costs, and production losses. A TBPM strategy defines when Preventive Maintenance (PM) starts, how frequently the PM is conducted and which segments of a linear asset are operated on in each PM action. A number of factors such as required minimal mission time, customer satisfaction, human resources, and acceptable risk levels need to be considered when planning such a strategy. However, in current practice, TBPM decisions are often made based on decision makers’ expertise or industrial historical practice, and lack a systematic analysis of the effects of these factors. To address this issue, here we investigate the characteristics of TBPM of linear assets, and develop an effective multiple criteria decision making approach for determining an optimal TBPM strategy. We develop a recursive optimization equation which makes it possible to evaluate the effect of different maintenance options for linear assets, such as the best partitioning of the asset into segments and the maintenance cost per segment.
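The abstract does not reproduce the recursive optimization equation, but the trade-off it balances can be illustrated with a deliberately simplified single-segment TBPM model: minimize the long-run cost rate over the PM interval, assuming minimal repair between PMs under a Weibull power-law failure intensity. Every parameter value below is invented for illustration.

```python
import numpy as np

def tbpm_cost_rate(T, c_pm, c_fail, eta, beta):
    """Expected long-run cost per unit time of a time-based PM policy with
    interval T. Assumes minimal repair between PMs, so the expected number
    of failures per cycle is H(T) = (T/eta)**beta (Weibull intensity)."""
    return (c_pm + c_fail * (T / eta) ** beta) / T

# Hypothetical parameters for one segment of a linear asset
c_pm, c_fail = 500.0, 2000.0      # PM cost; cost per failure (repair + losses)
eta, beta = 10.0, 2.0             # Weibull scale and shape (wear-out regime)

Ts = np.linspace(0.5, 30.0, 600)  # candidate PM intervals
rates = tbpm_cost_rate(Ts, c_pm, c_fail, eta, beta)
best_T = Ts[np.argmin(rates)]
print(best_T)  # for beta = 2 the analytic optimum is eta*sqrt(c_pm/c_fail) = 5
```

The paper's approach additionally optimises which segments are maintained in each PM action and how the asset is partitioned, which this single-segment sketch omits.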
Abstract:
In this paper, a class of unconditionally stable difference schemes based on the Padé approximation is presented for the Riesz space-fractional telegraph equation. Firstly, we introduce a new variable to transform the original differential equation into an equivalent differential equation system. Then, we apply a second order fractional central difference scheme to discretise the Riesz space-fractional operator. Finally, we use (1, 1), (2, 2) and (3, 3) Padé approximations to give a fully discrete difference scheme for the resulting linear system of ordinary differential equations. Matrix analysis is used to show the unconditional stability of the proposed algorithms. Two examples with known exact solutions are chosen to assess the proposed difference schemes. Numerical results demonstrate that these schemes provide accurate and efficient methods for solving a space-fractional hyperbolic equation.
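As an illustration of the spatial discretisation step, the sketch below assembles a second-order fractional centred-difference matrix for the Riesz operator using Ortigueira-type coefficients, one standard form consistent with the scheme described; the order alpha, grid size and spacing are arbitrary examples, not values from the paper.

```python
import numpy as np
from scipy.special import gamma

def riesz_matrix(n, alpha, h):
    """Dense n x n second-order fractional centred-difference approximation
    to the Riesz derivative of order alpha (1 < alpha <= 2):
        d^alpha u / d|x|^alpha  ~  -h**(-alpha) * sum_k g_k * u_{j-k},
    g_k = (-1)**k * Gamma(alpha+1) / (Gamma(alpha/2-k+1) * Gamma(alpha/2+k+1))."""
    k = np.arange(-(n - 1), n)
    g = ((-1.0) ** k) * gamma(alpha + 1) / (
        gamma(alpha / 2 - k + 1) * gamma(alpha / 2 + k + 1))
    M = np.empty((n, n))
    for i in range(n):
        M[i] = g[(n - 1) + i - np.arange(n)]  # Toeplitz: entry (i, j) uses g_{i-j}
    return -M / h ** alpha

# Example: alpha = 1.5 on 8 interior points with spacing h = 0.1 (illustrative)
M = riesz_matrix(8, 1.5, 0.1)
```

The resulting matrix is symmetric, with a negative diagonal and positive off-diagonals, mirroring the standard three-point Laplacian that the formula reduces to at alpha = 2.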
Abstract:
The use of expert knowledge to quantify a Bayesian Network (BN) is necessary when data is not available. This, however, raises questions regarding how opinions from multiple experts can be used in a BN. Linear pooling is a popular method for combining probability assessments from multiple experts. In particular, Prior Linear Pooling (PrLP), which pools the opinions and then places them into the BN, is a common method. This paper firstly proposes an alternative pooling method, Posterior Linear Pooling (PoLP), which constructs a BN for each expert and then pools the resulting probabilities at the nodes of interest. Secondly, it investigates the advantages and disadvantages of using these pooling methods to combine the opinions of multiple experts. Finally, the methods are applied to an existing BN, the Wayfinding Bayesian Network Model, to investigate the behaviour of different groups of people and how these different methods may be able to capture such differences. The paper focusses on six nodes (Human Factors, Environmental Factors, Wayfinding, Communication, Visual Elements of Communication, and Navigation Pathway) and three subgroups (Gender: female, male; Travel Experience: experienced, inexperienced; Travel Purpose: business, personal), and finds that different behaviours can indeed be captured by the different methods.
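The difference between the two pooling orders is easy to see on a toy two-node network A -> B with binary variables. The expert assessments and equal weights below are invented for illustration and are unrelated to the Wayfinding model.

```python
import numpy as np

def posterior(p_a, p_b_given_a, b=1):
    """P(A=1 | B=b) for a two-node BN A -> B with binary variables.
    p_b_given_a[a] = P(B=1 | A=a)."""
    like = p_b_given_a if b else 1 - p_b_given_a
    joint = np.array([(1 - p_a) * like[0], p_a * like[1]])
    return joint[1] / joint.sum()

# Two hypothetical experts' assessments (illustrative numbers only)
experts = [dict(p_a=0.3, p_b_given_a=np.array([0.2, 0.9])),
           dict(p_a=0.6, p_b_given_a=np.array([0.4, 0.7]))]
w = [0.5, 0.5]  # equal pooling weights

# PrLP: pool the experts' probabilities first, then run inference once
p_a_pool = sum(wi * e["p_a"] for wi, e in zip(w, experts))
cpt_pool = sum(wi * e["p_b_given_a"] for wi, e in zip(w, experts))
prlp = posterior(p_a_pool, cpt_pool)

# PoLP: run inference in each expert's own BN, then pool the posteriors
polp = sum(wi * posterior(e["p_a"], e["p_b_given_a"]) for wi, e in zip(w, experts))
print(prlp, polp)  # the two orders generally disagree
```

Because inference is nonlinear in the pooled probabilities, PrLP and PoLP give different answers even with identical weights, which is why the choice of pooling stage matters.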
Abstract:
The field of prognostics has attracted significant interest from the research community in recent times. Prognostics enables the prediction of failures in machines resulting in benefits to plant operators such as shorter downtimes, higher operation reliability, reduced operations and maintenance cost, and more effective maintenance and logistics planning. Prognostic systems have been successfully deployed for the monitoring of relatively simple rotating machines. However, machines and associated systems today are increasingly complex. As such, there is an urgent need to develop prognostic techniques for such complex systems operating in the real world. This review paper focuses on prognostic techniques that can be applied to rotating machinery operating under non-linear and non-stationary conditions. The general concept of these techniques, the pros and cons of applying these methods, as well as their applications in the research field are discussed. Finally, the opportunities and challenges in implementing prognostic systems and developing effective techniques for monitoring machines operating under non-stationary and non-linear conditions are also discussed.
Abstract:
The efficient computation of matrix function vector products has become an important area of research in recent times, driven in particular by two important applications: the numerical solution of fractional partial differential equations and the integration of large systems of ordinary differential equations. In this work we consider a problem that combines these two applications, in the form of a numerical solution algorithm for fractional reaction diffusion equations that, after spatial discretisation, is advanced in time using the exponential Euler method. We focus on the efficient implementation of the algorithm on Graphics Processing Units (GPUs), as we wish to make use of the increased computational power available with this hardware. We compute the matrix function vector products using the contour integration method in [N. Hale, N. Higham, and L. Trefethen. Computing A^α, log(A), and related matrix functions by contour integrals. SIAM J. Numer. Anal., 46(5):2505–2523, 2008]. Multiple levels of preconditioning are applied to reduce the GPU memory footprint and to further accelerate convergence. We also derive an error bound for the convergence of the contour integral method that allows us to pre-determine the appropriate number of quadrature points. Results are presented that demonstrate the effectiveness of the method for large two-dimensional problems, showing a speedup of more than an order of magnitude compared to a CPU-only implementation.
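The exponential Euler time-stepping (though not the contour-integral evaluation or the GPU implementation) can be sketched on a toy problem. Here phi_1(z) = (e^z - 1)/z is applied via a dense matrix exponential, which is only feasible at toy sizes; the 1D reaction-diffusion problem, grid and step size are invented for illustration.

```python
import numpy as np
from scipy.linalg import expm, solve

def phi1_apply(M, v):
    """phi_1(M) v = M^{-1} (e^M - I) v, evaluated densely (toy sizes only;
    the paper evaluates such matrix functions by contour integration)."""
    return solve(M, (expm(M) - np.eye(M.shape[0])) @ v)

def exponential_euler(A, f, u0, dt, steps):
    """Exponential Euler: u_{n+1} = u_n + dt * phi_1(dt A) (A u_n + f(u_n))."""
    u = u0.copy()
    for _ in range(steps):
        u = u + dt * phi1_apply(dt * A, A @ u + f(u))
    return u

# Toy 1D reaction-diffusion u_t = u_xx + u(1 - u) with Dirichlet BCs on [0, 1]
n = 50
h = 1.0 / (n + 1)
A = (np.diag(-2.0 * np.ones(n)) + np.diag(np.ones(n - 1), 1)
     + np.diag(np.ones(n - 1), -1)) / h**2   # standard second-difference Laplacian
x = np.linspace(h, 1.0 - h, n)
u0 = np.exp(-100 * (x - 0.5) ** 2)           # initial bump
u = exponential_euler(A, lambda v: v * (1 - v), u0, dt=1e-3, steps=100)
```

For a purely linear problem the scheme is exact: one step with f set to zero reproduces expm(dt*A) @ u0, which makes a convenient correctness check.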
Abstract:
The matrix of volcaniclastic kimberlite (VK) from the Muskox pipe (Northern Slave Province, Nunavut, Canada) is interpreted to represent an overprint of an original clastic matrix. Muskox VK is subdivided into three different matrix mineral assemblages that reflect differences in the proportions of original primary matrix constituents, temperature of formation and nature of the altering fluids. Using whole rock X-ray fluorescence (XRF), whole rock X-ray diffraction (XRD), microprobe analyses, back-scatter electron (BSE) imaging, petrography and core logging, we find that most matrix minerals (serpentine, phlogopite, chlorite, saponite, monticellite, Fe-Ti oxides and calcite) lack either primary igneous or primary clastic textures. The mineralogy and textures are most consistent with alteration overprinting of an original clastic matrix, with most minerals forming by retrograde reactions as the deposit cooled or, in the case of calcite, by precipitation from Ca-bearing fluids into a secondary porosity. The first mineral assemblage consists largely of serpentine, phlogopite, calcite, Fe-Ti oxides and monticellite and occurs in VK with relatively fresh framework clasts. Alteration reactions, driven by deuteric fluids derived from the juvenile constituents, promote the crystallisation of minerals that indicate relatively high temperatures of formation (> 400 °C). Lower-temperature minerals are not present because permeability was occluded before the deposit cooled to low temperatures, thus shielding the facies from further interaction with fluids. The other two matrix mineral assemblages consist largely of serpentine, phlogopite, calcite, +/- diopside, and +/- chlorite. They form in VK that contains more country rock, which may have caused the deposit to be cooler upon emplacement. Most framework components are completely altered, suggesting that larger volumes of fluids drove the alteration reactions.
These fluids were likely of meteoric provenance and became heated by the volcaniclastic debris when they percolated into the VK infill. Most alteration reactions ceased at temperatures > 200 °C, as indicated by the absence or paucity of lower-temperature phases in most samples, such as saponite. Recognition that Muskox VK contains an original clastic matrix is a necessary first step for evaluating the textural configuration, which is important for reconstructing the physical processes responsible for the formation of the deposit.
Abstract:
This paper proposes the addition of a weighted median Fisher discriminator (WMFD) projection prior to length-normalised Gaussian probabilistic linear discriminant analysis (GPLDA) modelling in order to compensate for additional session variation. In limited microphone data conditions, a linear-weighted approach is introduced to increase the influence of the microphone speech dataset. The linear-weighted WMFD-projected GPLDA system shows improvements in EER and DCF values over the pooled LDA- and WMFD-projected GPLDA systems in the interview-interview condition, as WMFD projection extracts more speaker-discriminant information from a limited number of sessions per speaker, and the linear-weighted GPLDA approach estimates reliable model parameters with limited microphone data.
Abstract:
In structural brain MRI, group differences or changes in brain structures can be detected using Tensor-Based Morphometry (TBM). This method consists of two steps: (1) a non-linear registration step that aligns all of the images to a common template, and (2) a subsequent statistical analysis. The numerous registration methods that have recently been developed differ in their detection sensitivity when used for TBM, and detection power is paramount in epidemiological studies or drug trials. We therefore developed a new fluid registration method that computes the mappings and performs statistics on them in a consistent way, providing a bridge between TBM registration and statistics. We used the Log-Euclidean framework to define a new regularizer that is a fluid extension of the Riemannian elasticity, which assures diffeomorphic transformations. This regularizer constrains the symmetrized Jacobian matrix, also called the deformation tensor. We applied our method to an MRI dataset from 40 fraternal and identical twins, revealing voxelwise measures of average volumetric differences in brain structure for subjects with different degrees of genetic resemblance.
Abstract:
Genetic correlation (rg) analysis determines how much of the correlation between two measures is due to common genetic influences. In an analysis of 4 Tesla diffusion tensor images (DTI) from 531 healthy young adult twins and their siblings, we generalized the concept of genetic correlation to determine common genetic influences on white matter integrity, measured by fractional anisotropy (FA), at all points of the brain, yielding an N×N genetic correlation matrix rg(x,y) between FA values at all pairs of voxels in the brain. With hierarchical clustering, we identified brain regions with relatively homogeneous genetic determinants, to boost the power to identify causal single nucleotide polymorphisms (SNPs). We applied genome-wide association (GWA) to assess associations between 529,497 SNPs and FA in clusters defined by hubs of the clustered genetic correlation matrix. We identified a network of genes, with a scale-free topology, that influences white matter integrity over multiple brain regions.
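The hierarchical clustering step can be illustrated on a small synthetic stand-in for the genetic correlation matrix: two blocks of "voxels" with high within-block correlation. The block structure, noise level and cluster count below are fabricated purely for the demonstration.

```python
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster
from scipy.spatial.distance import squareform

rng = np.random.default_rng(0)
n = 20
block = np.repeat([0, 1], n // 2)   # two hidden groups of 10 "voxels"

# Synthetic correlation matrix: high within-block, low between-block
rg = np.where(block[:, None] == block[None, :], 0.8, 0.1)
rg = rg + 0.05 * rng.standard_normal((n, n))
rg = (rg + rg.T) / 2                # symmetrise the added noise
np.fill_diagonal(rg, 1.0)

# Hierarchical clustering on the distance d = 1 - rg
dist = np.clip(1.0 - rg, 0.0, None)
np.fill_diagonal(dist, 0.0)
Z = linkage(squareform(dist, checks=False), method="average")
labels = fcluster(Z, t=2, criterion="maxclust")
print(labels)
```

In the study, clusters like these would then define the regions fed into the genome-wide association step; here fcluster simply recovers the planted block structure.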
Abstract:
Using a combination of multivariate statistical techniques and the graphical assessment of major ion ratios, the influences on hydrochemical variability of coal seam gas (or coal bed methane) groundwaters from several sites in the Surat and Clarence-Moreton basins in Queensland, Australia, were investigated. Several characteristic relationships between major ions were observed: 1) a strong positive linear correlation between the Na/Cl and alkalinity/Cl ratios; 2) an exponentially decaying trend between the Na/Cl and Na/alkalinity ratios; 3) inverse linear relationships between increasing chloride concentrations and decreasing pH for high salinity groundwaters; and 4) high residual alkalinity for lower salinity waters, and an inverse relationship between decreasing residual alkalinity and increasing chloride concentrations for more saline waters. The interpretation of the hydrochemical data provides invaluable insights into the hydrochemical evolution of coal seam gas (CSG) groundwaters that considers both the source of major ions in coals and the influence of microbial activity. Elevated chloride and sodium concentrations in more saline groundwaters appear to be influenced by organic-bound chlorine held in the coal matrix; a sodium and chloride ion source that has largely been neglected in previous CSG groundwater studies. However, contrastingly high concentrations of bicarbonate in low salinity waters could not be explained, and are possibly associated with a number of different factors such as coal degradation, methanogenic processes, the evolution of high-bicarbonate NaHCO3 water types earlier in the evolutionary pathway, and variability in gas reservoir characteristics. Using recently published data for CSG groundwaters in different basins, the characteristic major ion relationships identified for the new data presented in this study were also observed in other CSG groundwaters from Australia, as well as for those in the Illinois Basin in the USA.
This observation suggests that where coal maceral content and the dominant methanogenic pathway are similar, and where organic-bound chlorine is relatively abundant, distinct hydrochemical responses may be observed. Comparisons with published data of other NaHCO3 water types in non-CSG environments suggest that these characteristic major ion relationships described here can: i) serve as an indicator of potential CSG groundwaters in certain coal-bearing aquifers that contain methane; and ii) help in the development of strategic sampling programmes for CSG exploration and to monitor potential impacts of CSG activities on groundwater resources.
Abstract:
Diets low in fruits, vegetables, and whole grains, and high in saturated fat, salt, and sugar are the major contributors to the burden of chronic diseases globally. Previous research, and studies in this issue of Public Health Nutrition (PHN), show that unhealthy diets are more commonly observed among socioeconomically disadvantaged groups, and are key contributors to their higher rates of chronic disease. Most research examining socioeconomic inequalities in diet and bodyweight has been descriptive, and has focused on identifying the nature, extent, and direction of the inequalities. These types of studies are clearly necessary and important. We need however to move beyond description of the problem and focus much more on the question of why inequalities in diet and bodyweight exist. Furthering our understanding of this question will provide the necessary evidence-base to develop effective interventions to reduce the inequalities. The challenge of tackling dietary inequalities however doesn’t finish here: a maximally effective approach will also require equity-based policies that address the unequal population-distribution of social and economic resources, which is the fundamental root-cause of dietary and bodyweight inequalities.