933 results for Covariance matrices
Abstract:
The performance of scintillation detectors, composed of a scintillating crystal coupled to a photodetector, depends critically on the efficiency with which scintillation photons are collected and extracted from the crystal to the sensor. In highly pixelated imaging systems (e.g. PET, CT), the scintillators must be arranged in compact arrays with form factors unfavourable to photon transport, to the detriment of detector performance. The goal of the project is to optimize the performance of these pixel detectors by identifying the sources of light loss linked to the spectral, spatial and angular characteristics of the scintillation photons incident on the scintillator faces. Such information, acquired by Monte Carlo simulation, enables an appropriate weighting for evaluating the gains achievable through scintillator structuring methods aimed at improved light extraction toward the photodetector. A factorial design was used to evaluate the magnitude of the parameters affecting light collection, notably the absorption of the adhesive materials that ensure the mechanical integrity of the crystal arrays and the optical performance of reflectors, both of which have a considerable impact on light output. Moreover, a reflector widely used for its exceptional optical performance was characterized under conditions more realistic than immersion in air, in which its reflectivity is always reported. A substantial loss of reflectivity when it is inserted within scintillator arrays was revealed by simulation and then confirmed experimentally. This explains the high crosstalk rates observed, and opens the way to array assembly methods that, depending on the application, either limit or exploit this unsuspected transparency.
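A hedged sketch of the kind of angular bookkeeping involved: for an isotropic emitter inside a crystal, only photons within the escape cone of a face (set by the critical angle) can leave directly. The refractive indices below are assumed LYSO-like values, not parameters from the project.

```python
import numpy as np

# Assumed indices: a LYSO-like crystal (n ~ 1.82) facing air, not values
# taken from the project described above.
n_crystal, n_out = 1.82, 1.00
theta_c = np.arcsin(n_out / n_crystal)      # critical angle for total internal reflection

# Isotropic emission: cos(theta) is uniform on [-1, 1]; a photon escapes a
# given face directly only if its polar angle lies inside the escape cone.
rng = np.random.default_rng(4)
cos_theta = rng.uniform(-1.0, 1.0, 200_000)
mc_fraction = np.mean(cos_theta > np.cos(theta_c))

analytic = (1.0 - np.cos(theta_c)) / 2.0    # solid-angle fraction of the cone
print(mc_fraction, analytic)                # both around 0.08 for these indices
```

With only about 8% of photons per face inside the escape cone under these assumptions, most light relies on reflectors or surface structuring, which is why reflector performance dominates the light output.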
Abstract:
The deep-sea lantern shark Etmopterus spinax occurs in the northeast Atlantic on or near the bottoms of the outer continental shelves and slopes, and is regularly captured as bycatch in deep-water commercial fisheries. Given the lack of knowledge on the impacts of fisheries on this species, a demographic analysis using age-based Leslie matrices was carried out. Given the uncertainties in the mortality estimates and in the available life history parameters, several different scenarios, some incorporating stochasticity in the life history parameters (using Monte Carlo simulation), were analyzed. If only natural mortality were considered, even after introducing uncertainties in all parameters, the estimated population growth rate (λ) suggested an increasing population. However, if fishing mortality from trawl fisheries is considered, the estimates of λ indicated either increasing or declining populations. In these latter cases, the uncertainties in the species' reproductive cycle seemed to be particularly relevant, as a 2-year reproductive cycle indicated a stable population, while a longer (3-year) cycle indicated a declining population. The estimated matrix elasticities were in general higher for the survivorship parameters of the younger age classes and tended to decrease for the older ages. This highlights the susceptibility of this deep-sea squaloid to increasing fishing mortality, emphasizing that even though this is a small-sized species, it shows population dynamics patterns more typical of larger-sized and in general more vulnerable species.
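As a sketch of the age-based Leslie-matrix machinery used in such demographic analyses (with made-up vital rates, not the E. spinax estimates), the population growth rate λ is the dominant eigenvalue of the matrix:

```python
import numpy as np

# Hypothetical 3-age-class Leslie matrix: top row holds fecundities,
# the sub-diagonal holds age-specific survival probabilities.
# Values are illustrative only, not the E. spinax estimates.
L = np.array([
    [0.0, 1.5, 2.0],   # fecundities of age classes 0, 1, 2
    [0.6, 0.0, 0.0],   # survival from age 0 to age 1
    [0.0, 0.7, 0.0],   # survival from age 1 to age 2
])

# Population growth rate = dominant eigenvalue of the Leslie matrix.
lam = max(np.linalg.eigvals(L).real)
print(round(lam, 3))   # ~1.253 here: lambda > 1, an increasing population
```

λ > 1 indicates a growing population and λ < 1 a declining one; the stochastic scenarios repeat this calculation many times with vital rates drawn by Monte Carlo.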
Abstract:
The current approach to data analysis for the Laser Interferometer Space Antenna (LISA) depends on the time-delay interferometry (TDI) observables, which have to be generated before any weak-signal detection can be performed. These are linear combinations of the raw data with appropriate time shifts that lead to the cancellation of the laser frequency noises. This is possible because of the multiple occurrences of the same noises in the different raw data. Originally, these observables were manually generated starting with LISA as a simple stationary array and then adjusted to incorporate the antenna's motions. However, none of the observables survived the flexing of the arms, in that they did not lead to cancellation with the same structure. The principal component approach is another way of handling these noises, presented by Romano and Woan, which simplifies the data analysis by removing the need to create the observables before the analysis. This method also depends on the multiple occurrences of the same noises but, instead of using them for cancellation, it takes advantage of the correlations that they produce between the different readings. These correlations can be expressed in a noise (data) covariance matrix, which occurs in the Bayesian likelihood function when the noises are assumed to be Gaussian. Romano and Woan showed that performing an eigendecomposition of this matrix produced two distinct sets of eigenvalues that can be distinguished by the absence of laser frequency noise from one set. Transforming the raw data using the corresponding eigenvectors also produced data free from the laser frequency noises. This result led to the idea that the principal components may actually be time-delay interferometry observables, since they produce the same outcome: data that are free from laser frequency noise.
The aims here were (i) to investigate the connection between the principal components and these observables, (ii) to prove that data analysis using them is equivalent to that using the traditional observables and (iii) to determine how this method adapts to the real LISA, especially the flexing of the antenna. For testing the connection between the principal components and the TDI observables, a 10×10 covariance matrix containing integer values was used in order to obtain an algebraic solution for the eigendecomposition. The matrix was generated using fixed unequal arm lengths and stationary noises with equal variances for each noise type. The results confirm that all four Sagnac observables can be generated from the eigenvectors of the principal components. The observables obtained from this method, however, are tied to the length of the data and are not general expressions like the traditional observables; for example, the Sagnac observables for two different time stamps were generated from different sets of eigenvectors. It was also possible to generate the frequency-domain optimal AET observables from the principal components obtained from the power spectral density matrix. These results indicate that this method is another way of producing the observables; therefore analysis using principal components should give the same results as that using the traditional observables. This was proven by the fact that the same relative likelihoods (within 0.3%) were obtained from the Bayesian estimates of the signal amplitude of a simple sinusoidal gravitational wave using the principal components and the optimal AET observables. This method fails if the eigenvalues that are free from laser frequency noises are not generated. These are obtained from the covariance matrix, and the properties of LISA required for its computation are the phase-locking, arm lengths and noise variances.
Preliminary results on the effects of these properties on the principal components indicate that only the absence of phase-locking prevented their production. The flexing of the antenna results in time-varying arm lengths, which will appear in the covariance matrix and, from our toy-model investigations, this did not prevent the occurrence of the principal components. The difficulty with flexing, and also with non-stationary noises, is that the Toeplitz structure of the matrix will be destroyed, which will affect any computation methods that take advantage of this structure. In terms of separating the two sets of data for the analysis, this was not necessary because the laser frequency noises are very large compared to the photodetector noises, which resulted in a significant reduction of the data containing them after the matrix inversion. In the frequency domain the power spectral density matrices were block diagonal, which simplified the computation of the eigenvalues by allowing it to be done separately for each block. The results in general showed a lack of principal components in the absence of phase-locking, except for the zero bin. The major difference with the power spectral density matrix is that the time-varying arm lengths and non-stationarity do not show up, because of the summation in the Fourier transform.
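The eigenvalue separation described above can be seen in a toy stationary model (an assumption-laden sketch, not the full LISA response): if each of n readings contains the same large laser noise plus small independent photodetector noise, the covariance is σ_L²·ones + σ_p²·I, and all but one eigenvalue are free of the laser noise.

```python
import numpy as np

# Toy covariance: n readings sharing ONE large laser noise (variance
# sigma_L^2) plus independent photodetector noise (variance sigma_p^2).
n, sigma_L, sigma_p = 4, 100.0, 1.0
C = sigma_L**2 * np.ones((n, n)) + sigma_p**2 * np.eye(n)

w, V = np.linalg.eigh(C)           # eigenvalues in ascending order

# n-1 eigenvalues equal sigma_p^2: their eigenvectors are orthogonal to
# the all-ones direction, i.e. combinations in which the laser noise cancels.
print(w)                           # [1, 1, 1, 1 + n * sigma_L^2]
print(V[:, :n-1].T @ np.ones(n))   # ~0: laser noise drops out of these PCs
```

The single large eigenvalue carries the laser noise; the remaining principal components play the role of laser-noise-free combinations, which is the mechanism the abstract attributes to Romano and Woan.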
Abstract:
We study a totally discontinuous interval map defined on [0,1] which is associated with a deformation of the shift map on the two symbols 0 and 1. We define a sequence of transition matrices that characterizes the effect of the interval map on a family of partitions of the interval [0,1]. Recursive algorithms that build the sequence of matrices and their left and right eigenvectors are deduced. Moreover, we compute the Artin zeta function of the interval map.
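For a subshift of finite type, the Artin(–Mazur) zeta function has the closed form ζ(t) = exp(Σ_{n≥1} tr(Mⁿ) tⁿ/n) = 1/det(I − tM), where tr(Mⁿ) counts period-n points. The sketch below checks this numerically for an illustrative 0–1 transition matrix (the golden-mean shift, not the deformed shift of the paper).

```python
import numpy as np

# Illustrative transition matrix (golden-mean shift). tr(M^n) counts the
# period-n points of the associated subshift of finite type.
M = np.array([[1, 1],
              [1, 0]])

t = 0.1   # inside the radius of convergence (|t| < 1/spectral radius)
series = sum(np.trace(np.linalg.matrix_power(M, n)) * t**n / n
             for n in range(1, 60))
zeta_series = np.exp(series)                              # exp(sum tr(M^n) t^n / n)
zeta_closed = 1.0 / np.linalg.det(np.eye(2) - t * M)      # 1 / det(I - t M)
print(zeta_series, zeta_closed)                           # the two forms agree
```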
Abstract:
Linear algebra provides theory and technology that are the cornerstones of a range of cutting-edge mathematical applications, from designing computer games to complex industrial problems, as well as more traditional applications in statistics and mathematical modelling. Once past introductions to matrices and vectors, the challenges of balancing theory, applications and computational work across mathematical and statistical topics and problems are considerable, particularly given the diversity of abilities and interests in typical cohorts. This paper considers two such cohorts in a second-level linear algebra course in different years. The course objectives and materials were almost the same, but some changes were made in the assessment package. In addition to considering the effects of these changes, the links with achievement in first-year courses are analysed, together with achievement in a subsequent computational mathematics course. Some results that may initially appear surprising provide insight into the components of student learning in linear algebra.
Abstract:
Despite the best intentions of service providers and organisations, service delivery is rarely error-free. While numerous studies have investigated specific cognitive, emotional or behavioural responses to service failure and recovery, these studies do not fully capture the complexity of the services encounter. Consequently, this research develops a more holistic understanding of how specific service recovery strategies affect the responses of customers by combining two existing models—Smith & Bolton's (2002) model of emotional responses to service performance and Fullerton and Punj's (1993) structural model of aberrant consumer behaviour—into a conceptual framework. Specific service recovery strategies are proposed to influence consumer cognition, emotion and behaviour. This research was conducted using a 2×2 between-subjects quasi-experimental design administered via written survey. The experimental design manipulated two levels of two specific service recovery strategies: compensation and apology. The effects of the four recovery strategies were investigated by collecting data from 18–25-year-olds and were analysed using multivariate analysis of covariance and multiple regression analysis. The results suggest that different service recovery strategies are associated with varying scores of satisfaction, perceived distributive justice, positive emotions, negative emotions and negative functional behaviour, but not dysfunctional behaviour. These findings have significant implications for the theory and practice of managing service recovery.
Abstract:
Purpose: Waiting for service by customers is an important problem for many financial service marketers. Two new approaches are proposed: first, customer evaluation of the service is increased with an ambient scent; second, a cognitive variable is identified which differentiates customers by the way they value time, so that they can be segmented. Methodology: Pretests, including focus groups which highlighted financial services, and a pilot test were followed by a main sample of 607 subjects. Structural equation modelling and multivariate analysis of covariance were used for analysis. Findings: A cognitive variable, the need for time management, can be used, together with demographic and customer net-worth data, to segment a customer base. Two environmental interventions, music and scent, can increase customer satisfaction among customers kept waiting in a line. Research implications: Two original approaches to a rapidly growing service marketing problem are identified. Practical implications: Service contact points can reduce the incidence of "queue rage" and enhance customer satisfaction by either or both of two simple modifications to the service environment, or by a preventive strategy of offering targeted customers an alternative. Originality: A new method of segmentation and a new environmental intervention are proposed.
Abstract:
This paper proposes a new approach for delay-dependent robust H-infinity stability analysis and control synthesis of uncertain systems with time-varying delay. The key features of the approach include the introduction of a new Lyapunov–Krasovskii functional, the construction of an augmented matrix with uncorrelated terms, and the employment of a tighter bounding technique. As a result, significant performance improvement is achieved in system analysis and synthesis without using either free weighting matrices or model transformation. Examples are given to demonstrate the effectiveness of the proposed approach.
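The full delay-dependent conditions require an LMI (semidefinite programming) solver, but the underlying Lyapunov idea can be sketched in the delay-free case (an illustrative reduction, not the paper's method): x' = Ax is asymptotically stable iff AᵀP + PA = −Q admits a symmetric positive definite solution P for some Q > 0.

```python
import numpy as np
from scipy.linalg import solve_continuous_lyapunov

# Illustrative stable system matrix (not taken from the paper's examples).
A = np.array([[-2.0,  1.0],
              [ 0.0, -3.0]])
Q = np.eye(2)

# Solve the Lyapunov equation A^T P + P A = -Q.
P = solve_continuous_lyapunov(A.T, -Q)

# P symmetric positive definite  <=>  x' = Ax asymptotically stable.
stable = bool(np.all(np.linalg.eigvalsh(P) > 0))
print(stable)   # True: the eigenvalues of A are -2 and -3
```

The Lyapunov–Krasovskii functionals of the paper generalise this quadratic certificate to systems with time-varying delay, which is why the choice of functional and bounding technique drives the conservatism of the result.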
Abstract:
The molecular and metal profile fingerprints were obtained from a complex substance, Atractylis chinensis DC—a traditional Chinese medicine (TCM), with the use of the high performance liquid chromatography (HPLC) and inductively coupled plasma atomic emission spectroscopy (ICP-AES) techniques. This substance was used in this work as an example of a complex biological material which has found application as a TCM. Such TCM samples are traditionally processed by the Bran, Cut, Fried and Swill methods, and were collected from five provinces in China. The data matrices obtained from the two types of analysis produced two principal component biplots, which showed that the HPLC fingerprint data were discriminated on the basis of the methods for processing the raw TCM, while the metal analysis grouped according to the geographical origin. When the two data matrices were combined into one two-way matrix, the resulting biplot showed a clear separation on the basis of the HPLC fingerprints. Importantly, within each different grouping the objects separated according to their geographical origin, and they ranked approximately in the same order in each group. This result suggested that by using such an approach, it is possible to derive improved characterisation of the complex TCM materials on the basis of the two kinds of analytical data. In addition, two supervised pattern recognition methods, the K-nearest neighbors (KNN) method and linear discriminant analysis (LDA), were successfully applied to the individual data matrices—thus supporting the principal component analysis (PCA) approach.
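A hedged sketch of the combined-matrix PCA step, with simulated stand-ins for the HPLC and metal data (dimensions and values invented): the two matrices are joined column-wise, autoscaled, and decomposed by SVD to obtain the score and loading coordinates of a biplot.

```python
import numpy as np

# Simulated stand-ins: 10 TCM samples described by 6 HPLC peaks and
# 4 element concentrations (all values invented for illustration).
rng = np.random.default_rng(0)
hplc   = rng.normal(size=(10, 6))
metals = rng.normal(size=(10, 4))

X = np.hstack([hplc, metals])                       # one two-way matrix
X = (X - X.mean(axis=0)) / X.std(axis=0, ddof=1)    # autoscale each column

U, s, Vt = np.linalg.svd(X, full_matrices=False)
scores   = U[:, :2] * s[:2]   # sample coordinates on PC1/PC2 (biplot points)
loadings = Vt[:2].T           # variable coordinates (biplot arrows)
print(scores.shape, loadings.shape)   # (10, 2) (10, 2)
```

Autoscaling matters here because chromatographic peak areas and element concentrations sit on very different scales; without it, one block would dominate the combined biplot.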
Abstract:
Interactions of small molecules with biopolymers, e.g. the bovine serum albumin (BSA) protein, are important, and significant information is recorded in the UV–vis and fluorescence spectra of their reaction mixtures. The extraction of this information is difficult by conventional means, principally because there is significant overlapping of the spectra of the three analytes in the mixture. The interaction of berberine chloride (BC) and the BSA protein provides an interesting example of such complex systems. UV–vis and fluorescence spectra of BC and BSA mixtures were investigated in pH 7.4 Tris–HCl buffer at 37 °C. Two sample series were measured by each technique: (1) [BSA] was kept constant and [BC] was varied, and (2) [BC] was kept constant and [BSA] was varied. This produced four spectral data matrices, which were combined into one expanded spectral matrix. This was processed by the multivariate curve resolution–alternating least squares (MCR–ALS) method. The results produced: (1) the extracted pure BC, BSA and BC–BSA complex spectra from the measured, heavily overlapping composite responses; (2) the concentration profiles of BC, BSA and the BC–BSA complex, which are difficult to obtain by conventional means; and (3) estimates of the number of binding sites of BC.
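A minimal MCR–ALS sketch on simulated data (the spectra, concentrations and two-component setup are invented, and non-negativity is enforced by simple clipping rather than a constrained solver): the bilinear data matrix D ≈ C Sᵀ is resolved by alternating least squares.

```python
import numpy as np

# Simulate a two-component bilinear data set D = C S^T + noise:
# 20 mixtures measured at 50 wavelengths, two Gaussian "pure spectra".
rng = np.random.default_rng(1)
wl = np.linspace(0.0, 1.0, 50)
S_true = np.vstack([np.exp(-((wl - m) / 0.1) ** 2) for m in (0.3, 0.7)]).T  # 50x2
C_true = np.abs(rng.normal(size=(20, 2)))                                    # 20x2
D = C_true @ S_true.T + 1e-4 * rng.normal(size=(20, 50))

# Initial concentration estimates taken from columns of D near each peak.
C = D[:, [15, 35]].clip(min=0)
for _ in range(200):   # alternating least squares with non-negativity clipping
    S = np.linalg.lstsq(C, D, rcond=None)[0].T.clip(min=0)    # resolve spectra
    C = np.linalg.lstsq(S, D.T, rcond=None)[0].T.clip(min=0)  # resolve concentrations

resid = np.linalg.norm(D - C @ S.T) / np.linalg.norm(D)
print(resid)   # small relative residual: two components explain D
```

Combining several experiments into one expanded matrix, as the abstract describes, is what breaks the rotational ambiguity that a single data matrix would leave in C and S.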
Abstract:
Human-specific Bacteroides HF183 (HS-HF183), human-specific Enterococcus faecium esp (HS-esp), human-specific adenovirus (HS-AVs) and human-specific polyomavirus (HS-PVs) assays were evaluated in freshwater, seawater and distilled water to detect fresh sewage. The sewage-spiked water samples were also tested for the concentrations of traditional fecal indicators (i.e., Escherichia coli, enterococci and Clostridium perfringens) and enteric viruses such as enteroviruses (EVs), sapoviruses (SVs) and torquetenoviruses (TVs). The overall host-specificity of the HS-HF183 marker to differentiate between humans and other animals was 98%, while the HS-esp, HS-AVs and HS-PVs assays showed 100% host-specificity. All the human-specific markers showed >97% sensitivity to detect human fecal pollution. E. coli, enterococci and C. perfringens were detected up to sewage dilutions of 10⁻⁵, 10⁻⁴ and 10⁻³, respectively. HS-esp, HS-AVs, HS-PVs, SVs and TVs were detected up to a sewage dilution of 10⁻⁴, whilst EVs were detected up to a dilution of 10⁻⁵. The ability of the HS-HF183 marker to detect fresh sewage was 3–4 orders of magnitude higher than that of the HS-esp and viral markers. The ability to detect fresh sewage in freshwater, seawater and distilled water matrices was similar for the human-specific bacterial and viral markers. Based on our data, it appears that human-specific molecular markers are sensitive measures of fresh sewage pollution, and the HS-HF183 marker appears to be the most sensitive among them in terms of detecting fresh sewage. However, the presence of the HS-HF183 marker in environmental waters may not necessarily indicate the presence of enteric viruses, due to its high abundance in sewage compared to enteric viruses. More research is required on the persistence of these markers in environmental water samples in relation to traditional fecal indicators and enteric pathogens.
Abstract:
This study considers the solution of a class of linear systems related with the fractional Poisson equation (FPE) (−∇²)^(α/2) φ = g(x, y) with nonhomogeneous boundary conditions on a bounded domain. A numerical approximation to the FPE is derived using a matrix representation of the Laplacian to generate a linear system of equations with its matrix A raised to the fractional power α/2. The solution of the linear system then requires the action of the matrix function f(A) = A^(−α/2) on a vector b. For large, sparse, and symmetric positive definite matrices, the Lanczos approximation generates f(A)b ≈ β₀ V_m f(T_m) e₁. This method works well when both the analytic grade of A with respect to b and the residual for the linear system are sufficiently small. Memory constraints often require restarting the Lanczos decomposition; however, this is not straightforward in the context of matrix function approximation. In this paper, we use the ideas of thick-restart and adaptive preconditioning for solving linear systems to improve convergence of the Lanczos approximation. We give an error bound for the new method and illustrate its role in solving the FPE. Numerical results are provided to gauge the performance of the proposed method relative to exact analytic solutions.
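A hedged sketch of the (unrestarted) Lanczos step the abstract builds on, f(A)b ≈ β₀ V_m f(T_m) e₁ with f(A) = A^(−1/2) here, on a small dense SPD matrix; the thick-restart and preconditioning refinements of the paper are not reproduced.

```python
import numpy as np

def lanczos(A, b, m):
    """Plain Lanczos: returns beta0, orthonormal basis V_m, tridiagonal T_m."""
    n = len(b)
    V = np.zeros((n, m))
    T = np.zeros((m, m))
    beta0 = np.linalg.norm(b)
    V[:, 0] = b / beta0
    beta = 0.0
    for j in range(m):
        w = A @ V[:, j]
        if j > 0:
            w -= beta * V[:, j - 1]
        alpha = V[:, j] @ w
        w -= alpha * V[:, j]
        T[j, j] = alpha
        beta = np.linalg.norm(w)
        if j + 1 < m:
            T[j, j + 1] = T[j + 1, j] = beta
            V[:, j + 1] = w / beta
    return beta0, V, T

# Small SPD test matrix with spectrum in [1, 10] (illustrative only).
rng = np.random.default_rng(2)
Q = np.linalg.qr(rng.normal(size=(50, 50)))[0]
A = Q @ np.diag(np.linspace(1.0, 10.0, 50)) @ Q.T
b = rng.normal(size=50)

beta0, V, T = lanczos(A, b, m=15)
w, U = np.linalg.eigh(T)
fT = U @ np.diag(w ** -0.5) @ U.T      # f(T_m) = T_m^{-1/2} via eigendecomposition
approx = beta0 * V @ fT[:, 0]          # beta0 * V_m f(T_m) e1

wA, UA = np.linalg.eigh(A)
exact = UA @ np.diag(wA ** -0.5) @ UA.T @ b
err = np.linalg.norm(approx - exact) / np.linalg.norm(exact)
print(err)   # small: the Krylov approximation tracks A^{-1/2} b
```

Because only V_m and the m×m matrix T_m are stored, memory grows with m, which is exactly why restarting becomes necessary for large problems, and why restarting a matrix-function approximation is the nontrivial point the paper addresses.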
Abstract:
1. Ecological data sets often use clustered measurements or use repeated sampling in a longitudinal design. Choosing the correct covariance structure is an important step in the analysis of such data, as the covariance describes the degree of similarity among the repeated observations. 2. Three methods for choosing the covariance are: the Akaike information criterion (AIC), the quasi-information criterion (QIC), and the deviance information criterion (DIC). We compared the methods using a simulation study and using a data set that explored effects of forest fragmentation on avian species richness over 15 years. 3. The overall success was 80.6% for the AIC, 29.4% for the QIC and 81.6% for the DIC. For the forest fragmentation study the AIC and DIC selected the unstructured covariance, whereas the QIC selected the simpler autoregressive covariance. Graphical diagnostics suggested that the unstructured covariance was probably correct. 4. We recommend using DIC for selecting the correct covariance structure.
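A hedged sketch of likelihood-based covariance selection in the spirit of the AIC comparison above (simulated data, maximum likelihood by a crude grid search, not the GEE/Bayesian machinery behind QIC or DIC): repeated measures with an exchangeable correlation are fit under an independence model and a compound-symmetry model, and the AICs are compared.

```python
import numpy as np

# Simulated repeated measures: 200 subjects, 5 repeats, exchangeable
# (compound-symmetry) correlation rho = 0.5. Values are illustrative.
rng = np.random.default_rng(3)
n_subj, n_rep, rho = 200, 5, 0.5
Sigma_true = rho * np.ones((n_rep, n_rep)) + (1 - rho) * np.eye(n_rep)
Y = rng.multivariate_normal(np.zeros(n_rep), Sigma_true, size=n_subj)

def loglik(Y, Sigma):
    """Gaussian log-likelihood of the rows of Y under N(0, Sigma)."""
    n, p = Y.shape
    _, logdet = np.linalg.slogdet(Sigma)
    quad = np.einsum('ij,jk,ik->', Y, np.linalg.inv(Sigma), Y)
    return -0.5 * (n * p * np.log(2 * np.pi) + n * logdet + quad)

s2 = Y.var()   # pooled variance estimate
ll_ind = loglik(Y, s2 * np.eye(n_rep))   # independence: 1 parameter
ll_cs = max(loglik(Y, s2 * (r * np.ones((n_rep, n_rep)) + (1 - r) * np.eye(n_rep)))
            for r in np.linspace(0.01, 0.95, 95))   # compound symmetry: 2 parameters

aic_ind = 2 * 1 - 2 * ll_ind
aic_cs = 2 * 2 - 2 * ll_cs
print(aic_ind, aic_cs)   # the compound-symmetry model has the lower AIC
```

AIC trades fit against the number of covariance parameters; an unstructured covariance for 5 repeats would carry 15 parameters, so the penalty term is what protects against overfitting the correlation structure.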
Abstract:
Nanocomposite membranes are fabricated from sodalite nanocrystals (Sod-N) dispersed in BTDA-MDA polyimide matrices and then characterized structurally and for gas separation. No voids are found upon investigation of the interfacial contact between the inorganic and organic phases, even at a Sod-N loading of up to 35 wt.%. This is due to the functionalization of the zeolite nanocrystals with amino groups (≡Si(CH3)(CH2)3NH2), which covalently link the particles to the polyimide chains in the matrices. The addition of Sod-N increases the hydrogen permeability of the membranes, while the nitrogen permeability decreases. Overall, these nanocomposite membranes display substantial selectivity improvements. The sodalite–polyimide membrane containing 35 wt.% Sod-N has a hydrogen permeability of 8.0 Barrers and a H2/N2 ideal selectivity of 281 at 25 °C, whereas the plain polyimide membrane exhibits a hydrogen permeability of 7.0 Barrers and a H2/N2 ideal selectivity of 198 at the same testing temperature.
Abstract:
The purpose of the present study was to examine the role of fluid (gf), social (SI) and emotional intelligence (EI) in faking the Beck Depression Inventory (2nd ed., BDI-II). Twenty-two students and 26 non-students completed Raven's Advanced Progressive Matrices (APM), a social insight test, the Schutte et al. self-report EI scale, and the BDI-II under honest and faking instructions. Results were consistent with a new model of successful faking, in which a participant's original response must be manipulated into a strategic response, which must match diagnostic criteria. As hypothesised, the BDI-II could be faked, and gf was not related to faking ability. Counter to expectations, however, SI and EI were not related to faking ability. A second study explored why EI failed to facilitate faking. Forty-nine students and 50 non-students completed the EI measure, the Marlowe-Crowne scale and the Levenson et al. psychopathy scale. As hypothesised, EI was negatively correlated with psychopathy, but EI showed no relationship with socially desirable responding. It was concluded that in the first experiment, high-EI people did fake effectively, but high-psychopathy people (who had low EI) were also faking effectively, resulting in a distribution that showed no advantage to high-EI individuals.