22 results for Matrix analytic methods
in CentAUR: Central Archive University of Reading - UK
Abstract:
Dominant paradigms of causal explanation for why and how Western liberal-democracies go to war in the post-Cold War era remain versions of the 'liberal peace' or 'democratic peace' thesis. Yet such explanations have been shown to rest upon deeply problematic epistemological and methodological assumptions. Of equal importance, however, is the failure of these dominant paradigms to account for the 'neoliberal revolution' that has gripped Western liberal-democracies since the 1970s. The transition from liberalism to neoliberalism remains neglected in analyses of the contemporary Western security constellation. Arguing that neoliberalism can be understood simultaneously through the Marxian concept of ideology and the Foucauldian concept of governmentality – that is, as a complementary set of 'ways of seeing' and 'ways of being' – the thesis goes on to analyse British security in policy and practice, considering it as an instantiation of a wider neoliberal way of war. In so doing, the thesis draws upon, but also challenges and develops, established critical discourse analytic methods, incorporating within its purview not only the textual data that is usually considered by discourse analysts, but also material practices of security. This analysis finds that contemporary British security policy is predicated on a neoliberal social ontology, morphology and morality – an ideology or 'way of seeing' – focused on the notion of a globalised 'network-market', and is aimed at rendering circulations through this network-market amenable to neoliberal techniques of government. It is further argued that security practices shaped by this ideology imperfectly and unevenly achieve the realisation of neoliberal 'ways of being' – especially modes of governing self and other or the 'conduct of conduct' – and the re-articulation of subjectivities in line with neoliberal principles of individualism, risk, responsibility and flexibility. The policy and practice of contemporary British 'security' is thus recontextualised as a component of a broader 'neoliberal way of war'.
Abstract:
This paper extends the singular value decomposition to a path of matrices E(t). An analytic singular value decomposition of a path of matrices E(t) is an analytic path of factorizations E(t) = X(t)S(t)Y(t)^T, where X(t) and Y(t) are orthogonal and S(t) is diagonal. To maintain differentiability, the diagonal entries of S(t) are allowed to be either positive or negative and to appear in any order. This paper investigates the existence and uniqueness of analytic SVDs and develops an algorithm for computing them. We show that a real analytic path E(t) always admits a real analytic SVD, and that a full-rank, smooth path E(t) with distinct singular values admits a smooth SVD. We derive a differential equation for the left factor, develop Euler-like and extrapolated Euler-like numerical methods for approximating an analytic SVD, and prove that the Euler-like method converges.
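To illustrate why the diagonal of S(t) must be allowed to change sign and order, the sketch below computes SVDs along a discretized path and aligns successive factors by column permutation and sign flips. This is a minimal numerical sketch, not the paper's Euler-like method; the function name is hypothetical, and it assumes a square path, small steps and distinct singular values so the greedy matching is valid.

    import numpy as np

    def smooth_svd_path(E, ts):
        """Track a continuous SVD E(t) = X(t) S(t) Y(t)^T along a discretized
        path: at each step, permute and sign-flip the freshly computed factors
        to match the previous step, letting diagonal entries of S(t) become
        negative and appear in any order."""
        path, Xp, Yp = [], None, None
        for t in ts:
            X, s, Yt = np.linalg.svd(E(t))
            Y = Yt.T
            if Xp is not None:
                # Greedy column matching against the previous left factor
                # (valid for small steps and distinct singular values).
                perm = np.argmax(np.abs(Xp.T @ X), axis=1)
                X, s, Y = X[:, perm], s[perm], Y[:, perm]
                # Flipping (X_j, s_j) or (Y_j, s_j) together preserves the product.
                fx = np.sign(np.sum(Xp * X, axis=0))
                X, s = X * fx, s * fx
                fy = np.sign(np.sum(Yp * Y, axis=0))
                Y, s = Y * fy, s * fy
            path.append((X, s, Y))
            Xp, Yp = X, Y
        return path

    # Example: an analytic path whose singular values cross near t = 1.
    E = lambda t: np.array([[np.cos(t), t], [t, 1.0]])
    path = smooth_svd_path(E, np.linspace(0.0, 2.0, 200))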
Abstract:
Many scientific and engineering applications involve inverting large matrices or solving systems of linear algebraic equations. Solving these problems with proven direct methods can take a very long time, since their cost grows with the size of the matrix. The computational complexity of stochastic Monte Carlo methods depends only on the number of chains and the length of those chains. The computing power needed by inherently parallel Monte Carlo methods can be supplied very efficiently by distributed computing technologies such as Grid computing. In this paper we show how a load-balanced Monte Carlo method for computing the inverse of a dense matrix can be constructed, show how the method can be implemented on the Grid, and demonstrate how efficiently the method scales on multiple processors.
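As a sketch of the underlying technique (a textbook Monte Carlo scheme, not the paper's exact load-balancing algorithm; the function name is hypothetical), the code below estimates one row of (I - T)^{-1} by sampling the Neumann series with independent Markov chains. Because chains are independent, rows and chains parallelize trivially, which is what makes distribution across Grid nodes natural. For a matrix A with a convergent Jacobi splitting one can take T = I - D^{-1}A, with D the diagonal of A, so that A^{-1} = (I - T)^{-1} D^{-1}.

    import numpy as np

    def mc_inverse_row(T, i, n_chains=10000, max_len=50, rng=None):
        """Monte Carlo estimate of row i of (I - T)^{-1}, valid when ||T|| < 1:
        sample the Neumann series sum_k T^k with Markov chains whose
        transition probabilities are proportional to |T|."""
        rng = rng or np.random.default_rng(0)
        n = T.shape[0]
        P = np.abs(T)
        rowsum = P.sum(axis=1)
        est = np.zeros(n)
        for _ in range(n_chains):
            state, w = i, 1.0
            est[state] += w                  # k = 0 term of the series
            for _ in range(max_len):         # truncation introduces a small bias
                if rowsum[state] == 0.0:
                    break
                nxt = rng.choice(n, p=P[state] / rowsum[state])
                w *= T[state, nxt] * rowsum[state] / P[state, nxt]
                state = nxt
                est[state] += w
        return est / n_chains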
Abstract:
The goal of this review is to provide a state-of-the-art survey of sampling and probe methods for the solution of inverse problems. Further, a configuration approach to some of the problems is presented. We study the concepts and analytical results for several recent sampling and probe methods. We give an introduction to the basic idea behind each method using a simple model problem, and then provide a general formulation in terms of particular configurations to study the range of arguments used to set up the method. This provides a novel way to present the algorithms and the analytic arguments for their investigation in a variety of different settings. In detail, we investigate the probe method (Ikehata), the linear sampling method (Colton-Kirsch), the factorization method (Kirsch), the singular sources method (Potthast), the no response test (Luke-Potthast), the range test (Kusiak, Potthast and Sylvester) and the enclosure method (Ikehata) for the solution of inverse acoustic and electromagnetic scattering problems. The main ideas, approaches and convergence results of the methods are presented. For each method, we provide a historical survey of applications to different situations.
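To make the shared structure of several of these methods concrete: the linear sampling method (and, in related forms, the factorization method and range test) reduces, per sampling point, to a regularized solve of an ill-posed far-field equation followed by evaluation of an indicator. The sketch below shows that core step via Tikhonov regularization through the SVD; the discretized far-field matrix F and right-hand side are assumed given (producing them needs a scattering solver), F is assumed square, and the function name is hypothetical.

    import numpy as np

    def sampling_indicator(F, rhs, alpha=1e-6):
        """One Tikhonov-regularised solve of the far-field equation F g = rhs.
        In the linear sampling method the indicator 1/||g|| is evaluated for a
        rhs depending on each sampling point z; it is large inside the
        scatterer and small outside, which is how the support is visualised."""
        U, s, Vh = np.linalg.svd(F)
        filt = s / (s**2 + alpha)                    # Tikhonov filter factors
        g = Vh.conj().T @ (filt * (U.conj().T @ rhs))
        return 1.0 / np.linalg.norm(g)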
Abstract:
Capillary electrophoresis (CE) offers the analyst a number of key advantages for the analysis of the components of foods. CE offers better resolution than, say, high-performance liquid chromatography (HPLC), and is more adept at the simultaneous separation of a number of components of different chemistries within a single matrix. In addition, CE requires less rigorous sample cleanup procedures than HPLC, while offering the same degree of automation. However, despite these advantages, CE remains under-utilized by food analysts. Therefore, this review consolidates and discusses the currently reported applications of CE that are relevant to the analysis of foods. Some discussion is also devoted to the development of these reported methods and to the advantages/disadvantages compared with the more usual methods for each particular analysis. It is the aim of this review to give practicing food analysts an overview of the current scope of CE.
Abstract:
The purpose of this paper is to present two multi-criteria decision-making models, an Analytic Hierarchy Process (AHP) model and an Analytic Network Process (ANP) model, for the assessment of deconstruction plans, and to compare the two models through an experimental case study. Deconstruction planning is under pressure to reduce operating costs, adverse environmental impacts and duration, while improving productivity and safety in accordance with structure characteristics, site conditions and past experience. To achieve these targets in deconstruction projects, there is a pressing need for a formal procedure that helps contractors select the most appropriate deconstruction plan. Because a number of factors influence the selection of deconstruction techniques, engineers need effective tools to conduct the selection process. In this regard, multi-criteria decision-making methods such as AHP have been adopted to support deconstruction technique selection in previous research, in which it has been shown that the AHP method can help decision-makers make informed decisions on deconstruction technique selection based on a sound technical framework. In this paper, the authors present the application and comparison of two decision-making models, the AHP model and the ANP model, for deconstruction plan assessment. The paper concludes that both AHP and ANP are viable and capable tools for deconstruction plan assessment under the same set of evaluation criteria. However, although the ANP can measure relationships among selection criteria and their sub-criteria, which are normally ignored in the AHP, the authors also indicate that whether the ANP model can provide a more accurate result should be examined in further research.
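For readers unfamiliar with the mechanics, a minimal sketch of the standard AHP computation follows: priority weights from the principal eigenvector of a pairwise comparison matrix, plus Saaty's consistency ratio. The judgments in the example are illustrative only, not taken from the paper's case study.

    import numpy as np

    # Saaty's random consistency indices for matrix sizes 1-8.
    RI = {1: 0.0, 2: 0.0, 3: 0.58, 4: 0.90, 5: 1.12, 6: 1.24, 7: 1.32, 8: 1.41}

    def ahp_weights(A):
        """Priority weights from a pairwise comparison matrix A: the principal
        right eigenvector, normalised to sum to one, together with the
        consistency ratio CR = CI / RI (CR < 0.1 is conventionally acceptable)."""
        vals, vecs = np.linalg.eig(A)
        k = np.argmax(vals.real)
        w = np.abs(vecs[:, k].real)
        w /= w.sum()
        n = A.shape[0]
        ci = (vals[k].real - n) / (n - 1)
        return w, ci / RI[n]

    # Illustrative 1-9 judgments comparing three deconstruction plans
    # on a single criterion:
    A = np.array([[1.0, 3.0, 5.0],
                  [1/3, 1.0, 2.0],
                  [1/5, 1/2, 1.0]])
    weights, cr = ahp_weights(A)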
Abstract:
This paper identifies indicators for energy efficiency assessment in residential buildings in China through a wide literature review. Indicators are derived from three main sources: 1) existing building assessment methods; 2) existing Chinese standards and technology codes in building energy efficiency; 3) academic research. As a result, we propose an indicator list obtained by refining the indicators from the above sources. The identified indicators are weighted by the group Analytic Hierarchy Process (AHP) method, implemented in the following key steps. Step 1: experienced experts are selected to form a group. Step 2: a survey is conducted to collect the group members' individual judgments on the importance of the indicators. Step 3: members' judgments are synthesized into group judgments. Step 4: indicators are weighted by AHP on the group judgments. Step 5: a consistency check shows that the consistency of the judgment matrix is acceptable. We believe that the weighted indicators in this paper will provide an important reference for building energy efficiency assessment.
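A minimal, self-contained sketch of Steps 3-5 under common conventions (the paper's actual judgments and procedures are not reproduced): individual matrices are synthesized by element-wise geometric mean, which preserves reciprocity; weights are then taken as normalized row geometric means, one standard prioritization; and consistency is checked via Saaty's ratio.

    import numpy as np
    from functools import reduce

    def group_judgment(mats):
        """Step 3: synthesise individual pairwise comparison matrices into one
        group matrix by element-wise geometric mean, which keeps the
        reciprocal property a_ij = 1/a_ji."""
        return reduce(np.multiply, mats) ** (1.0 / len(mats))

    # Two illustrative expert judgments on three indicators:
    expert1 = np.array([[1.0, 2.0, 4.0], [1/2, 1.0, 2.0], [1/4, 1/2, 1.0]])
    expert2 = np.array([[1.0, 3.0, 3.0], [1/3, 1.0, 1.0], [1/3, 1.0, 1.0]])
    G = group_judgment([expert1, expert2])

    # Step 4: weights as normalised row geometric means.
    n = G.shape[0]
    w = np.prod(G, axis=1) ** (1.0 / n)
    w /= w.sum()

    # Step 5: consistency check (CR < 0.1 conventionally acceptable).
    lam_max = np.mean((G @ w) / w)
    CR = (lam_max - n) / (n - 1) / 0.58      # 0.58 = Saaty's RI for n = 3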
Abstract:
In this paper we consider hybrid (fast stochastic approximation and deterministic refinement) algorithms for Matrix Inversion (MI) and Solving Systems of Linear Algebraic Equations (SLAE). Monte Carlo methods are used for the stochastic approximation, since they are known to be very efficient at finding a quick, rough approximation of an element or a row of the inverse matrix, or a component of the solution vector. We show how the stochastic approximation of the MI can be combined with a deterministic refinement procedure to obtain the MI to the required precision, and how the SLAE can then be solved using the MI. We employ a splitting A = D - C of a given non-singular matrix A, where D is a diagonally dominant matrix and C is a diagonal matrix. In our algorithms for solving SLAE and MI, different choices of D can be considered in order to control the norm of the matrix T = D^{-1}C of the resulting SLAE and to minimize the number of Markov chains required to reach a given precision. Further, we run the algorithms on a mini-Grid and investigate their efficiency depending on the granularity. Corresponding experimental results are presented.
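For the deterministic refinement stage, one standard choice (shown here as a sketch; the paper's own refinement procedure may differ) is the quadratically convergent Schulz-Newton iteration, which polishes a rough Monte Carlo approximate inverse X whenever ||I - AX|| < 1.

    import numpy as np

    def refine_inverse(A, X, tol=1e-12, max_iter=50):
        """Deterministic refinement of a rough approximate inverse X of A via
        the Schulz-Newton iteration X <- X (2I - A X), which converges
        quadratically when the residual norm ||I - A X|| is below one."""
        I = np.eye(A.shape[0])
        for _ in range(max_iter):
            R = I - A @ X
            if np.linalg.norm(R) < tol:
                break
            X = X @ (I + R)          # identical to X (2I - A X)
        return X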
Abstract:
Objectives: Our objective was to test the performance of CA125 in classifying serum samples from a cohort of malignant and benign ovarian masses and age-matched healthy controls, and to assess whether combining information from matrix-assisted laser desorption/ionization (MALDI) time-of-flight profiling could improve diagnostic performance. Materials and Methods: Serum samples from women with ovarian neoplasms and healthy volunteers were subjected to CA125 assay and MALDI time-of-flight mass spectrometry (MS) profiling. Models were built from training data sets using discriminatory MALDI MS peaks in combination with CA125 values, and their ability to classify blinded test samples was assessed. These were compared with models using CA125 threshold levels from 193 patients with ovarian cancer, 290 with benign neoplasms, and 2236 postmenopausal healthy controls. Results: Using a CA125 cutoff of 30 U/mL, an overall sensitivity of 94.8% (96.6% specificity) was obtained when comparing malignancies versus healthy postmenopausal controls, whereas a cutoff of 65 U/mL provided a sensitivity of 83.9% (99.6% specificity). High classification accuracies were obtained for early-stage cancers (93.5% sensitivity). Reasons for these high accuracies include recruitment bias, restriction to postmenopausal women, and inclusion of only primary invasive epithelial ovarian cancer cases. The combination of MS profiling information with CA125 did not significantly improve the specificity/accuracy compared with classifications based on CA125 alone. Conclusions: We report unexpectedly good performance of serum CA125 using threshold classification in discriminating healthy controls and women with benign masses from those with invasive ovarian cancer. This highlights the dependence of diagnostic tests on the characteristics of the study population and the crucial need for authors to provide sufficient relevant details to allow comparison. Our study also shows that MS profiling information adds little to diagnostic accuracy. This finding is in contrast with other reports and shows the limitations of serum MS profiling for biomarker discovery and as a diagnostic tool.
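The threshold rule evaluated here is elementary; as a sketch, sensitivity and specificity at a given cutoff (the paper uses 30 and 65 U/mL) can be computed as follows. The array and function names are illustrative, not from the study's analysis code.

    import numpy as np

    def threshold_performance(ca125, is_cancer, cutoff=30.0):
        """Sensitivity and specificity of a simple CA125 threshold classifier:
        predict cancer when the CA125 value (U/mL) is at or above the cutoff."""
        pred = np.asarray(ca125, dtype=float) >= cutoff
        truth = np.asarray(is_cancer, dtype=bool)
        sensitivity = (pred & truth).sum() / truth.sum()
        specificity = (~pred & ~truth).sum() / (~truth).sum()
        return sensitivity, specificity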
Abstract:
Modelling the interaction of terahertz (THz) radiation with biological tissue poses many interesting problems. THz radiation is not obviously described by either an electric field distribution or an ensemble of photons, and biological tissue is an inhomogeneous medium with an electronic permittivity that is both spatially and frequency dependent, making it a complex system to model. A three-layer system of parallel-sided slabs has been used as the system through which the passage of THz radiation has been simulated. Two modelling approaches have been developed: a thin film matrix model and a Monte Carlo model. The source data for each of these methods, taken at the same time as the data recorded to experimentally verify them, was a THz spectrum that had passed through air only. Experimental verification of these two models was carried out using a three-layered in vitro phantom. Simulated transmission spectrum data were compared to experimental transmission spectrum data, first to determine and then to compare the accuracy of the two methods. Good agreement was found, with typical results having a correlation coefficient of 0.90 for the thin film matrix model and 0.78 for the Monte Carlo model over the full THz spectrum. Further work is underway to improve the models above 1 THz.
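The thin film matrix model referred to here is, in essence, the standard characteristic (transfer) matrix method of thin-film optics. A minimal sketch for normal incidence through a stack of parallel-sided slabs follows; the layer indices and thicknesses in the example are placeholders, not the phantom's measured values.

    import numpy as np

    def power_transmission(ns, ds, freq, n_in=1.0, n_out=1.0):
        """Characteristic (transfer) matrix model: power transmission of a
        normally incident plane wave through parallel-sided slabs with
        (possibly complex, frequency-dependent) refractive indices ns and
        thicknesses ds in metres; n_in and n_out are the real indices of
        the bounding half-spaces."""
        k0 = 2 * np.pi * freq / 299792458.0          # free-space wavenumber
        M = np.eye(2, dtype=complex)
        for n, d in zip(ns, ds):
            delta = k0 * n * d                       # phase thickness of the layer
            M = M @ np.array([[np.cos(delta), 1j * np.sin(delta) / n],
                              [1j * n * np.sin(delta), np.cos(delta)]])
        B = M[0, 0] + M[0, 1] * n_out
        C = M[1, 0] + M[1, 1] * n_out
        t = 2 * n_in / (n_in * B + C)
        return (n_out / n_in) * abs(t) ** 2

    # Placeholder three-layer phantom evaluated at 0.5 THz:
    T = power_transmission([1.5 + 0.05j, 2.1 + 0.2j, 1.5 + 0.05j],
                           [0.5e-3, 1.0e-3, 0.5e-3], 0.5e12)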
Abstract:
This paper introduces a method for simulating multivariate samples that have exact means, covariances, skewness and kurtosis. We introduce a new class of rectangular orthogonal matrices which is fundamental to the methodology, and we call these L matrices. They may be deterministic, parametric or data-specific in nature. The target moments determine the L matrix; infinitely many random samples with the same exact moments may then be generated by multiplying the L matrix by arbitrary random orthogonal matrices. This methodology is thus termed “ROM simulation”. Considering certain elementary types of random orthogonal matrices, we demonstrate that they generate samples with different characteristics. ROM simulation has applications to many problems that are usually addressed with standard Monte Carlo methods, but no parametric assumptions are required (unless parametric L matrices are used), so there is no sampling error caused by the discrete approximation of a continuous distribution, which is a major source of error in standard Monte Carlo simulations. For illustration, we apply ROM simulation to determine the value-at-risk of a stock portfolio.
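The core mechanism can be sketched for the first two moments: one fixed matrix with exactly zero column means and exactly identity sample covariance is post-multiplied by random orthogonal matrices, and every draw then has the exact target mean and covariance, since rotation preserves both. Matching skewness and kurtosis exactly requires the special L matrices constructed in the paper; the construction below (whitened Gaussians, a hypothetical function name, m > n assumed) is only a two-moment illustration.

    import numpy as np

    def rom_two_moment_draws(mu, cov, m, n_draws, seed=0):
        """Generate n_draws samples of m observations on len(mu) variables,
        each with exactly the sample mean mu and sample covariance cov, by
        rotating one fixed whitened matrix with random orthogonal matrices."""
        rng = np.random.default_rng(seed)
        mu = np.asarray(mu, dtype=float)
        n = len(mu)
        Z = rng.standard_normal((m, n))
        Z -= Z.mean(axis=0)                    # exact zero column means
        Cw = np.linalg.cholesky(Z.T @ Z / m)
        L = Z @ np.linalg.inv(Cw).T            # exact identity sample covariance
        A = np.linalg.cholesky(cov).T          # colouring matrix: cov = A.T @ A
        for _ in range(n_draws):
            Q, Rm = np.linalg.qr(rng.standard_normal((n, n)))
            Q *= np.sign(np.diag(Rm))          # Haar-distributed orthogonal matrix
            yield mu + (L @ Q) @ A             # rotation preserves both moments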
Abstract:
Feedback design for a second-order control system leads to an eigenstructure assignment problem for a quadratic matrix polynomial. It is desirable that the feedback controller not only assigns specified eigenvalues to the second-order closed loop system but also that the system is robust, or insensitive to perturbations. We derive here new sensitivity measures, or condition numbers, for the eigenvalues of the quadratic matrix polynomial and define a measure of the robustness of the corresponding system. We then show that the robustness of the quadratic inverse eigenvalue problem can be achieved by solving a generalized linear eigenvalue assignment problem subject to structured perturbations. Numerically reliable methods for solving the structured generalized linear problem are developed that take advantage of the special properties of the system in order to minimize the computational work required. In this part of the work we treat the case where the leading coefficient matrix in the quadratic polynomial is nonsingular, which ensures that the polynomial is regular. In a second part, we will examine the case where the open loop matrix polynomial is not necessarily regular.
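A sketch of the objects involved, under the paper's assumption that the leading coefficient matrix is nonsingular: the quadratic polynomial is linearized to a generalized eigenproblem for the eigenvalues, and a normwise sensitivity measure is evaluated from unit left and right eigenvectors of Q(lam). The measure used below is one common choice from the polynomial eigenvalue literature, not necessarily the paper's new condition numbers.

    import numpy as np
    import scipy.linalg as la

    def quadratic_eigen_sensitivities(M, C, K):
        """Eigenvalues of Q(lam) = lam^2 M + lam C + K (M nonsingular) via a
        companion linearisation, each paired with a normwise sensitivity
        measure built from unit left/right eigenvectors of Q(lam)."""
        n = M.shape[0]
        Z, I = np.zeros((n, n)), np.eye(n)
        A = np.block([[Z, I], [-K, -C]])       # companion pencil: A v = lam B v
        B = np.block([[I, Z], [Z, M]])
        lams = la.eigvals(A, B)
        a = [np.linalg.norm(K, 2), np.linalg.norm(C, 2), np.linalg.norm(M, 2)]
        out = []
        for lam in lams:
            Q = lam**2 * M + lam * C + K       # numerically singular at lam
            U, s, Vh = np.linalg.svd(Q)
            x, y = Vh[-1].conj(), U[:, -1]     # unit right/left eigenvectors
            denom = abs(y.conj() @ (2 * lam * M + C) @ x)
            out.append((lam, (a[0] + a[1] * abs(lam) + a[2] * abs(lam)**2) / denom))
        return out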