28 results for Permutation-Symmetric Covariance
Abstract:
We produce five flavour models for the lepton sector. All five models fit the existing data on the neutrino mass-squared differences and on the lepton mixing angles at the 1 sigma level. The models are based on the type I seesaw mechanism, on a Z2 symmetry for each lepton flavour, and on either a (spontaneously broken) symmetry under the interchange of two lepton flavours, a (spontaneously broken) CP symmetry incorporating that interchange, or both symmetries simultaneously. Each model makes definite predictions both for the scale of the neutrino masses and for the phase δ in lepton mixing; the fifth model also predicts a correlation between the lepton mixing angles θ12 and θ23.
Abstract:
Previous Monte Carlo studies have investigated the multileaf collimator (MLC) contribution to the build-up region for fields in which the MLC leaves fully blocked the openings defined by the collimation jaws. In the present work, we investigate the same effect for symmetric and asymmetric MLC-defined field sizes (2×2, 4×4, 10×10 and 3×7 cm2). A Varian 2100C/D accelerator with a 120-leaf MLC is accurately modeled for a 6 MV photon beam using the BEAMnrc/EGSnrc code. Our results indicate that particles scattered from the accelerator head and the MLC are responsible for an increase of about 7% in the surface dose when comparing the 2×2 and 10×10 cm2 fields. We found that the MLC contribution to the total build-up dose is about 2% for the 2×2 cm2 field and less than 1% for the largest fields.
Abstract:
In the literature, the terms “polyneuropathy”, “peripheral neuropathy” and “neuropathy” are often mistakenly used as synonyms. Polyneuropathy is a specific term that refers to a relatively homogeneous process affecting multiple peripheral nerves. Most of these tend to present as symmetric polyneuropathies that first manifest in the distal portions of the affected nerves. Many of these distal symmetric polyneuropathies are due to toxic-metabolic causes such as alcohol abuse and diabetes mellitus. Others may result from an overproduction of substances that cause nerve pathology, as observed in anti-MAG neuropathy and monoclonal gammopathy of undetermined significance. Other “overproduction” disorders are hereditary, as in the Portuguese type of familial amyloid polyneuropathy (FAP). FAP belongs to a group of hereditary amyloidoses; it is an autosomal dominant, multisystemic disorder in which the mutant amyloid precursor, transthyretin, is produced in excess, primarily by the liver, which accounts for approximately 98% of all transthyretin production. FAP is confirmed by detecting a transthyretin variant with a methionine-for-valine substitution at position 30 [TTR (Met30)]. The Portuguese type of FAP was first described by the Portuguese neurologist Corino de Andrade in 1939 and published in 1951. Most persons with this disorder are descended from Portuguese sailors who sired offspring in various locations, primarily in Sweden, Japan and Mallorca; their descendants emigrated worldwide, so that the disorder has been reported in other countries as well. More than 2000 symptomatic cases have been reported in Portugal. FAP progresses rapidly, with an average time course from symptom onset to multi-organ involvement and death of between ten and twenty years.
Treatments directed at removing this aberrant protein, such as plasmapheresis and immunoadsorption, proved unsuccessful. Liver transplantation has been the only effective solution, as evidenced by almost 2000 liver transplants performed worldwide. A therapy for FAP with a novel agent, “Tafamidis”, has shown some promise in ongoing phase III clinical trials. It is well recognized that regular physical activity of moderate intensity has a positive effect on physical fitness as gauged by body composition, aerobic capacity, muscular strength, endurance and flexibility. Physical fitness has been reported to reduce symptoms and lessen impairment when performing activities of daily living. Exercise has been advocated as part of a comprehensive approach to the treatment of chronic diseases. Therefore, this chapter concludes with a discussion of the role of exercise training in FAP.
Abstract:
The study aimed to compare the impact of stigma and subjective well-being in people with different chronic diseases. A total of 729 patients were assessed, recruited in Portuguese hospitals, who had resumed their normal lives after diagnosis. Controlling for a set of sociodemographic and clinical variables, the application of Multivariate Analysis of Covariance models revealed significant differences between the chronic disease groups only for perceived stigma. People with obesity, epilepsy and multiple sclerosis reported more stigma, and people with type 1 diabetes and myasthenia gravis reported less stigma.
Abstract:
We present a new model of the lepton sector that uses a family symmetry A(4) to make predictions for lepton mixing which are invariant under any permutation of the three flavours. We show that those predictions broadly agree with the experimental data, leading to a largish sin²θ12 ≳ 0.34, to |cos δ| ≳ 0.7, and to |0.5 − sin²θ23| ≳ 0.08; cos δ and 0.5 − sin²θ23 are predicted to have identical signs. (C) 2013 Elsevier B.V. All rights reserved.
Abstract:
We consider a general coupling of two identical chaotic dynamical systems and obtain the conditions for synchronization. We consider two types of synchronization: complete synchronization and delayed synchronization. We then consider four different couplings with different behaviors regarding their ability to synchronize either completely or with delay: the symmetric linear coupled system, the commanded linear coupled system, the commanded coupled system with delay, and the symmetric coupled system with delay. The values of the coupling strength for which a coupling synchronizes define its window of synchronization. We obtain analytically the windows of complete synchronization and apply the result to the considered couplings that admit complete synchronization. We also obtain analytically the window of chaotic delayed synchronization for the only considered coupling that admits chaotic delayed synchronization, the commanded coupled system with delay. Finally, we use four different free chaotic dynamics (based on the tent map, the logistic map, a three-piecewise linear map and a cubic-like map) to observe numerically the analytically predicted windows.
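As a concrete illustration, here is a minimal Python sketch (not the paper's formalism) of a symmetric linear coupling of two logistic maps, with a hypothetical coupling strength c; complete synchronization shows up as a vanishing difference |x − y|:

```python
import numpy as np

def logistic(x, r=4.0):
    """Fully chaotic logistic map f(x) = r x (1 - x)."""
    return r * x * (1.0 - x)

def symmetric_linear_coupling(c, n_steps=2000, seed=0):
    """Iterate two identical logistic maps under a symmetric linear
    coupling of strength c:
        x' = (1 - c) f(x) + c f(y)
        y' = (1 - c) f(y) + c f(x)
    Returns the final synchronization error |x - y|."""
    rng = np.random.default_rng(seed)
    x, y = rng.random(2)
    for _ in range(n_steps):
        fx, fy = logistic(x), logistic(y)
        x, y = (1 - c) * fx + c * fy, (1 - c) * fy + c * fx
    return abs(x - y)

# A coupling strength inside the synchronization window makes the
# error collapse to zero; uncoupled chaotic orbits stay apart.
print(symmetric_linear_coupling(0.5))  # inside the window: error goes to 0
print(symmetric_linear_coupling(0.0))  # no coupling: error stays O(1)
```

Sweeping c and recording where the error vanishes reproduces numerically the kind of synchronization window the abstract derives analytically.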
Abstract:
In this article, we calibrate the Vasicek interest rate model under the risk neutral measure by learning the model parameters using Gaussian process regression. The calibration is done by maximizing the likelihood of zero coupon bond log prices, using mean and covariance functions computed analytically, as well as likelihood derivatives with respect to the parameters. The maximization uses the conjugate gradient method. The only prices needed for calibration are zero coupon bond prices, and the parameters are obtained directly in the arbitrage-free risk neutral measure.
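The pricing side of this calibration can be sketched with the standard Vasicek closed form for zero coupon bond log prices under dr = a(b − r)dt + σdW; the parameter values below are illustrative, not calibrated:

```python
import numpy as np

def vasicek_zcb_log_price(r0, tau, a, b, sigma):
    """Log price of a zero coupon bond with maturity tau under the
    risk neutral Vasicek short-rate model dr = a (b - r) dt + sigma dW.
    The closed form is ln P = ln A(tau) - B(tau) r0."""
    B = (1.0 - np.exp(-a * tau)) / a
    lnA = (b - sigma**2 / (2.0 * a**2)) * (B - tau) - sigma**2 * B**2 / (4.0 * a)
    return lnA - B * r0

# Illustrative (not calibrated) parameters: 2% short rate, 3% mean level.
log_p = vasicek_zcb_log_price(r0=0.02, tau=5.0, a=0.5, b=0.03, sigma=0.01)
print(np.exp(log_p))  # a 5-year discount factor somewhat below 1
```

In the article's setup these log prices are the observations whose likelihood, with analytically computed mean and covariance, is maximized over (a, b, σ).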
Abstract:
Brain dopamine transporter imaging by Single Photon Emission Computed Tomography (SPECT) with 123I-FP-CIT (DaTScanTM) has become an important tool in the diagnosis and evaluation of parkinsonian syndromes. This diagnostic method allows the visualization of a portion of the striatum, where the healthy pattern resembles two symmetric commas, and thereby the evaluation of the presynaptic dopaminergic system, in which dopamine transporters are responsible for the reuptake of dopamine from the synaptic cleft into the nigrostriatal nerve terminals, where it is stored or degraded. In daily practice, assessment of DaTScanTM commonly relies on visual evaluation alone for diagnosis. However, this process is complex and subjective, as it depends on the observer's experience, and it is associated with high intra- and inter-observer variability. Studies have shown that semiquantification can improve the diagnosis of parkinsonian syndromes. Semiquantification requires image segmentation methods based on regions of interest (ROIs). ROIs are drawn in specific (striatum) and nonspecific (background) uptake areas, and specific binding ratios are then calculated. The low adherence to semiquantification in the diagnosis of parkinsonian syndromes is related not only to the time it requires but also to the need for a database of reference values adapted to the population concerned, as well as to the examination protocol of each department. Studies have concluded that this process increases the reproducibility of semiquantification. The aim of this investigation was to create and validate a database of healthy controls for dopamine transporters with DaTScanTM, named DBRV. The created database has been adapted to the Nuclear Medicine Department's protocol and to the population of the Infanta Cristina Hospital, located in Badajoz, Spain.
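The specific binding ratio mentioned above is a simple ratio of ROI means; a minimal sketch, with invented ROI count values (real pipelines use department-specific ROI templates and population-adapted reference ranges such as the DBRV):

```python
import numpy as np

def specific_binding_ratio(striatal_counts, background_counts):
    """Specific binding ratio used in DaTScan semiquantification:
    (mean specific uptake - mean nonspecific uptake) / mean nonspecific uptake."""
    striatum = float(np.mean(striatal_counts))
    background = float(np.mean(background_counts))
    return (striatum - background) / background

# Invented toy ROI counts: striatal uptake roughly 3x the background.
print(specific_binding_ratio([300, 310, 295], [100, 98, 102]))
```

The computed ratio is then compared against the reference-value database to support or refute the visual read.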
Abstract:
Several popular Ansätze of lepton mass matrices that contain texture zeros are confronted with current neutrino observational data. We perform a systematic chi-squared analysis of a wide class of schemes, considering arbitrary Hermitian charged-lepton mass matrices and symmetric mass matrices for Majorana neutrinos or Hermitian mass matrices for Dirac neutrinos. Our study reveals that several patterns are still consistent with all the observations at the 68.27% confidence level, while some others are disfavored or excluded by the experimental data. The well-known Frampton-Glashow-Marfatia two-zero textures, hybrid textures, and parallel structures (among others) are considered.
Abstract:
We have calculated the equilibrium shape of the axially symmetric meniscus along which a spherical bubble contacts a flat liquid surface by analytically integrating the Young-Laplace equation in the presence of gravity, in the limit of large Bond numbers. This method has the advantage that it provides semianalytical expressions for key geometrical properties of the bubble in terms of the Bond number. Results are in good overall agreement with experimental data and are consistent with fully numerical (Surface Evolver) calculations. In particular, we are able to describe how the bubble shape changes from hemispherical, with a flat, shallow bottom, to lenticular, with a deeper, curved bottom, as the Bond number is decreased.
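The Bond number that controls this shape transition is the ratio of gravitational to capillary forces; a minimal sketch, with illustrative values for a millimetric air bubble at a clean water surface:

```python
def bond_number(delta_rho, radius, surface_tension, g=9.81):
    """Bond number Bo = (delta_rho * g * R**2) / gamma: the ratio of
    gravitational to capillary forces acting on the bubble."""
    return delta_rho * g * radius**2 / surface_tension

# Illustrative values: density contrast ~1000 kg/m^3 (air bubble in
# water), surface tension ~0.072 N/m, radius 2 mm.
print(bond_number(delta_rho=1000.0, radius=2e-3, surface_tension=0.072))
```

Larger bubbles (larger Bo) sit closer to the hemispherical, flat-bottomed limit treated analytically in the abstract; smaller bubbles tend toward the lenticular shape.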
Abstract:
The optimal design of cold-formed steel columns is addressed in this paper, with two objectives: maximize the local-global buckling strength and maximize the distortional buckling strength. The design variables of the problem are the angles of orientation of the cross-section wall elements; the thickness and width of the steel sheet that forms the cross-section are fixed. The elastic local, distortional and global buckling loads are determined using the Finite Strip Method (CUFSM), and the strength of cold-formed steel columns (of given length) is calculated using the Direct Strength Method (DSM). The bi-objective optimization problem is solved using the Direct MultiSearch (DMS) method, which does not use any derivatives of the objective functions. Trade-off Pareto optimal fronts are obtained separately for symmetric and anti-symmetric cross-section shapes. The results are analyzed and discussed, and some interesting conclusions about the individual strengths (local-global and distortional) are drawn.
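The trade-off Pareto fronts mentioned above consist of the non-dominated designs; a minimal sketch of such a filter for two maximized objectives, with invented strength values (the actual DMS optimizer is far more involved):

```python
def pareto_front(points):
    """Return the non-dominated subset of (f1, f2) pairs when both
    objectives are to be maximized: a point is dropped if some other
    point is at least as good in both objectives and different."""
    front = []
    for p in points:
        dominated = any(
            q[0] >= p[0] and q[1] >= p[1] and q != p for q in points
        )
        if not dominated:
            front.append(p)
    return front

# Invented (local-global strength, distortional strength) pairs.
designs = [(120, 80), (100, 100), (90, 95), (130, 60), (110, 85)]
print(pareto_front(designs))
```

Here (90, 95) is dominated by (100, 100) and drops out; the remaining four designs form the trade-off front between the two buckling strengths.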
Abstract:
The development of high-spatial-resolution airborne and spaceborne sensors has improved the capability of ground-based data collection in the fields of agriculture, geography, geology, mineral identification, detection [2, 3], and classification [4–8]. The signal read by the sensor from a given spatial element of resolution and at a given spectral band is a mixture of components originated by the constituent substances, termed endmembers, located at that element of resolution. This chapter addresses hyperspectral unmixing, which is the decomposition of the pixel spectra into a collection of constituent spectra, or spectral signatures, and their corresponding fractional abundances indicating the proportion of each endmember present in the pixel [9, 10]. Depending on the mixing scales at each pixel, the observed mixture is either linear or nonlinear [11, 12]. The linear mixing model holds when the mixing scale is macroscopic [13]; the nonlinear model holds when the mixing scale is microscopic (i.e., intimate mixtures) [14, 15]. The linear model assumes negligible interaction among distinct endmembers [16, 17], whereas the nonlinear model assumes that incident solar radiation is scattered by the scene through multiple bounces involving several endmembers [18]. Under the linear mixing model, and assuming that the number of endmembers and their spectral signatures are known, hyperspectral unmixing is a linear problem, which can be addressed, for example, by the maximum likelihood setup [19], the constrained least-squares approach [20], spectral signature matching [21], the spectral angle mapper [22], and subspace projection methods [20, 23, 24]. Orthogonal subspace projection [23] reduces the data dimensionality, suppresses undesired spectral signatures, and detects the presence of a spectral signature of interest. The basic concept is to project each pixel onto a subspace that is orthogonal to the undesired signatures.
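Under the linear mixing model with known endmember signatures, as in the least-squares approaches cited above, unmixing each pixel is a small linear inverse problem; a minimal synthetic sketch with invented signatures (constrained variants additionally enforce nonnegativity and sum-to-one):

```python
import numpy as np

rng = np.random.default_rng(1)

# Invented endmember signatures: 3 endmembers observed over 50 bands.
M = rng.uniform(0.1, 1.0, size=(50, 3))
a_true = np.array([0.5, 0.3, 0.2])               # true fractional abundances
y = M @ a_true + 1e-3 * rng.standard_normal(50)  # observed pixel plus noise

# Unconstrained least-squares estimate of the abundances of this pixel.
a_hat, *_ = np.linalg.lstsq(M, y, rcond=None)
print(a_hat)
```

With low noise the estimate lands close to the true abundance vector, and its components nearly sum to one even without enforcing the constraint explicitly.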
As shown by Settle [19], the orthogonal subspace projection technique is equivalent to the maximum likelihood estimator. This projection technique was extended by three unconstrained least-squares approaches [24] (signature space orthogonal projection, oblique subspace projection, and target signature space orthogonal projection). Other works using the maximum a posteriori probability (MAP) framework [25] and projection pursuit [26, 27] have also been applied to hyperspectral data. In most cases, however, the number of endmembers and their signatures are not known. Independent component analysis (ICA) is an unsupervised source separation process that has been applied with success to blind source separation, feature extraction, and unsupervised recognition [28, 29]. ICA consists in finding a linear decomposition of the observed data that yields statistically independent components. Given that hyperspectral data are, in given circumstances, linear mixtures, ICA comes to mind as a possible tool to unmix this class of data. In fact, the application of ICA to hyperspectral data has been proposed in reference 30, where endmember signatures are treated as sources and the mixing matrix is composed of the abundance fractions, and in references 9, 25, and 31–38, where the sources are the abundance fractions of each endmember. The first approach faces two problems: (1) the number of samples is limited to the number of channels, and (2) the process of pixel selection, which plays the role of choosing the mixed sources, is not straightforward. In the second approach, ICA is based on the assumption of mutually independent sources, which does not hold for hyperspectral data, since the sum of the abundance fractions is constant, implying dependence among the abundances. This dependence compromises the applicability of ICA to hyperspectral images. In addition, hyperspectral data are immersed in noise, which degrades ICA performance.
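The dependence induced by the constant-sum constraint can be made concrete: abundance fractions drawn from any distribution on the simplex, e.g. a Dirichlet, are necessarily negatively correlated, which already violates the ICA independence assumption. A minimal sketch:

```python
import numpy as np

rng = np.random.default_rng(0)

# Abundance fractions on the simplex: nonnegative and summing to one.
abundances = rng.dirichlet(alpha=[1.0, 1.0, 1.0], size=100_000)

# The constant-sum constraint forces negative covariance between the
# fractions, so they cannot be mutually independent as ICA assumes.
cov = np.cov(abundances, rowvar=False)
print(cov[0, 1])  # negative (about -1/36 for this symmetric Dirichlet)
```

If one fraction increases, the others must jointly decrease to keep the sum at one, which is precisely the dependence that degrades ICA on this class of data.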
IFA [39] was introduced as a method for recovering independent hidden sources from their observed noisy mixtures. IFA implements two steps. First, the source densities and noise covariance are estimated from the observed data by maximum likelihood. Second, the sources are reconstructed by an optimal nonlinear estimator. Although IFA is a well-suited technique for unmixing independent sources under noisy observations, the dependence among abundance fractions in hyperspectral imagery compromises, as in the ICA case, the IFA performance. Under the linear mixing model, hyperspectral observations lie in a simplex whose vertices correspond to the endmembers. Several approaches [40–43] have exploited this geometric feature of hyperspectral mixtures [42]. The minimum volume transform (MVT) algorithm [43] determines the simplex of minimum volume containing the data. MVT-type approaches are computationally complex: usually, these algorithms first find the convex hull defined by the observed data and then fit a minimum-volume simplex to it. Aiming at a lower computational complexity, algorithms such as vertex component analysis (VCA) [44], the pixel purity index (PPI) [42], and N-FINDR [45] still find the minimum-volume simplex containing the data cloud, but they assume the presence in the data of at least one pure pixel of each endmember. This is a strong requirement that may not hold in some data sets; in any case, these algorithms find the set of purest pixels in the data. Hyperspectral sensors collect spatial images over many narrow contiguous bands, yielding large amounts of data. For this reason, the processing of hyperspectral data, including unmixing, is very often preceded by a dimensionality reduction step to reduce computational complexity and to improve the signal-to-noise ratio (SNR).
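The pure-pixel assumption behind VCA, PPI, and N-FINDR can be illustrated with a toy example; the brute-force search below (invented 2-D data, not the actual N-FINDR algorithm) picks the pixels spanning the maximum-volume simplex, which are exactly the pure pixels whenever they are present:

```python
import numpy as np
from itertools import combinations

rng = np.random.default_rng(3)

# Invented 2-D "spectra": 3 pure endmembers plus 50 mixed pixels
# lying strictly inside the simplex they span.
endmembers = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0]])
mixed = rng.dirichlet([1.0, 1.0, 1.0], size=50) @ endmembers
pixels = np.vstack([mixed, endmembers])

def max_volume_triple(X):
    """Brute-force stand-in for N-FINDR: pick the 3 pixels spanning the
    largest-area triangle; with pure pixels present, these are recovered."""
    best, best_area = None, -1.0
    for i, j, k in combinations(range(len(X)), 3):
        d1, d2 = X[j] - X[i], X[k] - X[i]
        area = 0.5 * abs(d1[0] * d2[1] - d1[1] * d2[0])
        if area > best_area:
            best, best_area = (i, j, k), area
    return X[list(best)]

print(max_volume_triple(pixels))
```

If the pure pixels are removed from the data, the selected triple shrinks inside the true simplex, which is exactly why the pure-pixel requirement is called a strong one.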
Principal component analysis (PCA) [46], maximum noise fraction (MNF) [47], and singular value decomposition (SVD) [48] are three well-known projection techniques widely used in remote sensing in general and in unmixing in particular. A newly introduced method [49] exploits the structure of hyperspectral mixtures, namely the fact that spectral vectors are nonnegative. The computational complexity associated with these techniques is an obstacle to real-time implementations; to overcome this problem, band selection [50] and non-statistical [51] algorithms have been introduced. This chapter addresses hyperspectral data source dependence and its impact on ICA and IFA performance. The study considers simulated and real data and is based on mutual information minimization. Hyperspectral observations are described by a generative model that takes into account the degradation mechanisms normally found in hyperspectral applications, namely signature variability [52–54], abundance constraints, topography modulation, and system noise. The computation of the mutual information is based on fitting mixtures of Gaussians (MOG) to the data. The MOG parameters (number of components, means, covariances, and weights) are inferred using a minimum description length (MDL) based algorithm [55]. We study the behavior of the mutual information as a function of the unmixing matrix. The conclusion is that the unmixing matrix minimizing the mutual information might be very far from the true one. Nevertheless, some abundance fractions might be well separated, mainly in the presence of strong signature variability, a large number of endmembers, and high SNR. We end this chapter by sketching a new methodology to blindly unmix hyperspectral data, in which the abundance fractions are modeled as a mixture of Dirichlet sources. This model enforces the positivity and constant-sum (full additivity) constraints. The mixing matrix is inferred by an expectation-maximization (EM) type algorithm.
This approach is in the vein of references 39 and 56, replacing the independent sources represented by a MOG with a mixture of Dirichlet sources. Compared with the geometric-based approaches, the advantage of this model is that there is no need for pure pixels in the observations. The chapter is organized as follows. Section 6.2 presents a spectral radiance model and formulates spectral unmixing as a linear problem accounting for abundance constraints, signature variability, topography modulation, and system noise. Section 6.3 presents a brief overview of the ICA and IFA algorithms. Section 6.4 illustrates the performance of IFA and of some well-known ICA algorithms on experimental data. Section 6.5 studies the limitations of ICA and IFA in unmixing hyperspectral data. Section 6.6 presents results of ICA on real data. Section 6.7 describes the new blind unmixing scheme and some illustrative examples. Section 6.8 concludes with some remarks.