Abstract:
One of the most challenging tasks underlying many hyperspectral imagery applications is spectral unmixing, which decomposes a mixed pixel into a collection of reflectance spectra, called endmember signatures, and their corresponding fractional abundances. Independent Component Analysis (ICA) has recently been proposed as a tool to unmix hyperspectral data. The basic goal of ICA is to find a linear transformation that recovers independent sources (abundance fractions) given only sensor observations that are unknown linear mixtures of the unobserved independent sources. In hyperspectral imagery the sum of the abundance fractions associated with each pixel is constant due to physical constraints in the data acquisition process. Thus, the sources cannot be independent. This paper addresses hyperspectral data source dependence and its impact on ICA performance. The study considers simulated and real data. In the simulated scenarios, hyperspectral observations are described by a generative model that takes into account the degradation mechanisms normally found in hyperspectral applications. We conclude that ICA does not unmix all sources correctly. This conclusion is based on a study of the mutual information. Nevertheless, some sources may be well separated, mainly when the number of sources is large and the signal-to-noise ratio (SNR) is high.
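The dependence induced by the sum-to-one constraint can be illustrated with a small numeric sketch (a toy Dirichlet model of abundance fractions, not the paper's generative model): fractions that must sum to one are necessarily negatively correlated, which violates the independence assumption underlying ICA.

```python
import numpy as np

rng = np.random.default_rng(0)
# Abundance fractions per pixel must sum to one (physical constraint);
# a symmetric Dirichlet distribution is a convenient toy model for this.
A = rng.dirichlet(alpha=[1.0, 1.0, 1.0], size=100_000)  # 3 toy "sources"

# The sum-to-one constraint forces negative correlation between sources,
# so they cannot be statistically independent, as ICA assumes.
corr = np.corrcoef(A.T)
print(corr[0, 1])  # ≈ -0.5 for a symmetric 3-source Dirichlet
```

For a symmetric Dirichlet with three components the pairwise correlation is exactly -1/2, so the sample estimate lands close to it.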
Abstract:
One of the core tasks of the virtual-manufacturing environment is to characterise the transformation of the state of material during each of the unit processes. This transformation in shape, material properties, etc. can only be reliably achieved through the use of models in a simulation context. Unfortunately, many manufacturing processes involve the material being treated in both the liquid and solid states, the transformation between which may be achieved by heat transfer and/or electromagnetic fields. The computational modelling of such processes, involving the interactions amongst various phenomena, is a considerable challenge. However, it must be addressed effectively if virtual manufacturing environments are to become a reality. This contribution focuses upon one attempt to develop such a multi-physics computational toolkit. The approach uses a single discretisation procedure and provides for direct interaction amongst the component phenomena. The need to exploit parallel high-performance hardware is addressed so that simulation elapsed times can be brought within the realms of practicality. Examples of multi-physics modelling in relation to shape casting and solder joint formation reinforce the motivation for this work.
Abstract:
Performing macroscopy in pathology involves planning and implementing methods for the selection, description and collection of biological material from human organs and tissues, actively contributing to the clinical pathology analysis by preparing the macroscopic report and by collecting and identifying fragments, according to standardized protocols and internationally recognized criteria for determining prognosis. The Macroscopy in Pathology course is a full-year program with theoretical and practical components taught by pathologists. It is divided into weekly modules by organ/system surgical pathology and includes a practical "hands-on" component in pathology departments. The students are 50 biomedical scientists, aged from 22 to 50 years, from all across the country, who want to acquire competences in macroscopy. A blended learning strategy was used in order to give students the opportunity to attend at a distance, to support the contents, lessons and interaction with colleagues and teachers, and to facilitate formative/summative assessment.
Abstract:
LOPES-DOS-SANTOS, V.; CONDE-OCAZIONEZ, S.; NICOLELIS, M. A. L.; RIBEIRO, S. T.; TORT, A. B. L. Neuronal assembly detection and cell membership specification by principal component analysis. PLoS ONE, v. 6, p. e20996, 2011.
Abstract:
Dissertation (Master's)—Universidade de Brasília, Faculty of Technology, Department of Electrical Engineering, 2015.
Abstract:
As one of the newest members in the field of artificial immune systems (AIS), the Dendritic Cell Algorithm (DCA) is based on behavioural models of natural dendritic cells (DCs). Unlike other AIS, the DCA does not rely on training data; instead, domain or expert knowledge is required to predetermine the mapping between the input signals of a particular instance and the three categories used by the DCA. This data preprocessing phase has drawn the criticism of manually over-fitting the data to the algorithm, which is undesirable. Therefore, in this paper we have attempted to ascertain whether it is possible to use principal component analysis (PCA) techniques to automatically categorise input data while still generating useful and accurate classification results. The integrated system is tested with a biometrics dataset for the stress recognition of automobile drivers. The experimental results have shown that the application of PCA to the DCA for the purpose of automated data preprocessing is successful.
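The PCA step described above — automatically deriving a small number of signal categories from raw features — can be sketched minimally as follows. The feature matrix here is a hypothetical stand-in, not the biometrics dataset used in the paper.

```python
import numpy as np

rng = np.random.default_rng(1)
# Hypothetical biometric feature matrix: 200 instances x 6 sensor channels
# (shapes and values are illustrative only).
X = rng.normal(size=(200, 6)) @ rng.normal(size=(6, 6))

# PCA via eigendecomposition of the covariance matrix of centered data
Xc = X - X.mean(axis=0)
eigvals, eigvecs = np.linalg.eigh(np.cov(Xc, rowvar=False))
order = np.argsort(eigvals)[::-1]      # sort components by explained variance
components = eigvecs[:, order[:3]]     # keep the top 3 components

# Project each instance onto the components; in the paper's setting such
# projections would stand in for the DCA's three input-signal categories.
signals = Xc @ components
print(signals.shape)  # (200, 3)
```

Each instance is thus reduced to three uncorrelated scores without any manual mapping from features to categories.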
Abstract:
The current approach to data analysis for the Laser Interferometer Space Antenna (LISA) depends on the time delay interferometry (TDI) observables, which have to be generated before any weak-signal detection can be performed. These are linear combinations of the raw data with appropriate time shifts that lead to the cancellation of the laser frequency noises. This is possible because of the multiple occurrences of the same noises in the different raw data streams. Originally, these observables were manually generated, starting with LISA as a simple stationary array and then adjusted to incorporate the antenna's motions. However, none of the observables survived the flexing of the arms, in that they did not lead to cancellation with the same structure. The principal component approach is another way of handling these noises, presented by Romano and Woan, which simplifies the data analysis by removing the need to create the observables before the analysis. This method also depends on the multiple occurrences of the same noises but, instead of using them for cancellation, it takes advantage of the correlations that they produce between the different readings. These correlations can be expressed in a noise (data) covariance matrix, which appears in the Bayesian likelihood function when the noises are assumed to be Gaussian. Romano and Woan showed that performing an eigendecomposition of this matrix produces two distinct sets of eigenvalues that can be distinguished by the absence of laser frequency noise from one set. The transformation of the raw data using the corresponding eigenvectors also produces data that are free from the laser frequency noises. This result led to the idea that the principal components may actually be time delay interferometry observables, since they produce the same outcome, that is, data that are free from laser frequency noise.
The aims here were (i) to investigate the connection between the principal components and these observables, (ii) to prove that data analysis using them is equivalent to that using the traditional observables, and (iii) to determine how this method adapts to the real LISA, especially the flexing of the antenna. For testing the connection between the principal components and the TDI observables, a 10 x 10 covariance matrix containing integer values was used in order to obtain an algebraic solution for the eigendecomposition. The matrix was generated using fixed unequal arm lengths and stationary noises with equal variances for each noise type. Results confirm that all four Sagnac observables can be generated from the eigenvectors of the principal components. The observables obtained from this method, however, are tied to the length of the data and are not general expressions like the traditional observables; for example, the Sagnac observables for two different time stamps were generated from different sets of eigenvectors. It was also possible to generate the frequency-domain optimal AET observables from the principal components obtained from the power spectral density matrix. These results indicate that this method is another way of producing the observables; therefore, analysis using principal components should give the same results as analysis using the traditional observables. This was proven by the fact that the same relative likelihoods (within 0.3%) were obtained from the Bayesian estimates of the signal amplitude of a simple sinusoidal gravitational wave using the principal components and the optimal AET observables. This method fails if the eigenvalues that are free from laser frequency noises are not generated. These are obtained from the covariance matrix, and the properties of LISA required for its computation are the phase-locking, arm lengths and noise variances.
Preliminary results of the effects of these properties on the principal components indicate that only the absence of phase-locking prevented their production. The flexing of the antenna results in time-varying arm lengths, which will appear in the covariance matrix and, from our toy-model investigations, this did not prevent the occurrence of the principal components. The difficulty with flexing, and also with non-stationary noises, is that the Toeplitz structure of the matrix is destroyed, which will affect any computation methods that take advantage of this structure. In terms of separating the two sets of data for the analysis, this was not necessary, because the laser frequency noises are very large compared to the photodetector noises, which resulted in a significant reduction of the data containing them after the matrix inversion. In the frequency domain, the power spectral density matrices were block diagonal, which simplified the computation of the eigenvalues by allowing it to be done separately for each block. The results in general showed a lack of principal components in the absence of phase-locking, except for the zero bin. The major difference with the power spectral density matrix is that the time-varying arm lengths and non-stationarity do not show up, because of the summation in the Fourier transform.
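The eigenvalue split described above can be mimicked with a toy covariance matrix (not the LISA noise model): a few large common-noise directions plus small independent noise produce two clearly separated sets of eigenvalues, and the eigenvectors of the small set define data combinations free of the large common noise.

```python
import numpy as np

rng = np.random.default_rng(2)
n = 6
# Toy covariance: rank-3 "laser" noise with huge variance plus unit
# "photodetector" noise (illustrative, not the actual LISA matrix).
U = rng.normal(size=(n, 3))
C = 1e6 * U @ U.T + np.eye(n)

# Eigendecomposition: eigenvalues come back in ascending order, split into
# three small values (detector noise only) and three laser-dominated ones.
eigvals, eigvecs = np.linalg.eigh(C)
print(eigvals[:3])   # ≈ 1 each: the laser-noise-free set
```

Projecting the data onto the eigenvectors of the small set cancels the common noise, which is the analogue of forming TDI-like combinations.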
Abstract:
Vigna unguiculata (L.) Walp (cowpea) is a food crop with high nutritional value that is cultivated throughout tropical and subtropical regions of the world. The main constraint on high productivity of cowpea is water deficit, caused by the long periods of drought that occur in these regions. The aim of the present study was to select elite cowpea genotypes with enhanced drought tolerance by applying principal component analysis to 219 first-cycle progenies obtained in a recurrent selection program. The experimental design comprised a simple 15 x 15 lattice with 450 plots, each of two rows of 10 plants. Plants were grown under water-deficit conditions by applying a water depth of 205 mm, one-half of that required by cowpea. The variables assessed were flowering, maturation, pod length, number and mass of beans/pod, mass of 100 beans, and productivity/plot. Ten elite cowpea genotypes were selected, in which principal components 1 and 2 encompassed variables related to yield (pod length, beans/pod, and productivity/plot) and precocity (flowering and maturation), respectively.
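The selection procedure — scoring each progeny on the leading principal components of standardized traits and keeping the top genotypes — can be sketched with simulated stand-in data (the trait matrix below is not the study's data):

```python
import numpy as np

rng = np.random.default_rng(3)
# Hypothetical trait matrix: 219 progenies x 7 traits (flowering,
# maturation, pod length, beans/pod, bean mass, 100-bean mass,
# productivity/plot) -- simulated values for illustration only.
X = rng.normal(size=(219, 7))
Z = (X - X.mean(axis=0)) / X.std(axis=0)   # standardize each trait

# First two principal components from the trait correlation matrix
eigvals, eigvecs = np.linalg.eigh(np.corrcoef(Z, rowvar=False))
pcs = eigvecs[:, np.argsort(eigvals)[::-1][:2]]
scores = Z @ pcs                            # per-genotype PC scores

# Rank genotypes on PC1 (the yield-related axis in the study) and
# keep ten elites, mirroring the selection of ten genotypes.
elite = np.argsort(scores[:, 0])[::-1][:10]
print(elite.shape)  # (10,)
```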
Abstract:
A servo-controlled automatic machine can perform tasks that involve the synchronized actuation of a significant number of servo-axes, namely one-degree-of-freedom (DoF) electromechanical actuators. Each servo-axis comprises a servo-motor, a mechanical transmission and an end-effector, and is responsible for generating the desired motion profile and providing the power required to achieve the overall task. The design of such a machine must involve a detailed study from a mechatronic viewpoint, due to its electric and mechanical nature. The first objective of this thesis is the development of an overarching electromechanical model for a servo-axis. Every loss source is taken into account, be it mechanical or electrical. The mechanical transmission is modeled by means of a sequence of lumped-parameter blocks. The electric model of the motor and the inverter takes into account winding losses, iron losses and controller switching losses. No experimental characterizations are needed to implement the electric model, since the parameters are inferred from the data available in commercial catalogs. With the global model at hand, a second objective of this work is to perform an optimization analysis, in particular the selection of the motor-reducer unit. The optimal transmission ratios that minimize several objective functions are found. An optimization process is carried out and repeated for each candidate motor. Then, we present a novel method in which the discrete set of available motors is extended to a continuous domain by fitting manufacturer data. The problem becomes a two-dimensional nonlinear optimization subject to nonlinear constraints, and the solution gives the optimal choice for the motor-reducer system. The presented electromechanical model, along with the implementation of optimization algorithms, forms a complete and powerful simulation tool for servo-controlled automatic machines. The tool allows for determining a wide range of electric and mechanical parameters and the behavior of the system in different operating conditions.
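The flavour of the transmission-ratio optimization can be shown with a deliberately simplified model (not the thesis's full loss model): choose the ratio that minimizes peak motor torque for a lossless rigid axis, where all parameter values are illustrative.

```python
import numpy as np

# Toy servo-axis sizing: pick the transmission ratio n minimizing the peak
# motor torque for a given load cycle (parameter values are illustrative).
J_m, J_L = 1e-4, 5e-2      # motor / load inertia [kg m^2]
alpha = 200.0              # peak load acceleration [rad/s^2]
tau_L = 2.0                # external load torque [N m]

def motor_torque(n):
    # Motor-side torque: accelerate the motor rotor (scaled up by n) plus
    # the load inertia and external torque reflected through the reducer.
    return J_m * n * alpha + (J_L * alpha + tau_L) / n

# Brute-force search over candidate ratios
n_grid = np.linspace(1.0, 200.0, 10_000)
n_opt = n_grid[np.argmin(motor_torque(n_grid))]

# Closed-form optimum for this simple model (inertia/torque matching)
n_star = np.sqrt((J_L * alpha + tau_L) / (J_m * alpha))
print(round(n_opt), round(n_star))  # 24 24
```

In the thesis's setting the objective would include transmission and electrical losses, and the search would repeat per candidate motor.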
Abstract:
Neisseria meningitidis is a Gram-negative obligate human pathogen, mostly found as a commensal in the oropharyngeal mucosa of healthy individuals. It can invade this epithelium, producing rare but devastating and fast-progressing outcomes, such as meningococcal meningitis and septicemia, leading to death (about 135,000 deaths per year worldwide). Conjugate vaccines for serogroups A, C, W135, X and Y have been developed, while for N. meningitidis serogroup B (MenB) the vaccines were based on Outer Membrane Vesicles (OMV). One of them is 4C-MenB (Bexsero). The antigens included in this vaccine's formulation are, in addition to the OMV from the New Zealand epidemic strain 98/254, three recombinant proteins: NadA, NHBA and fHbp. While the role of these recombinant components has been deeply characterized, the vesicular contribution to 4C-MenB-elicited protection is mediated mainly by porin A and other unidentified antigens. To unravel the relative contribution of these different antigens in eliciting protective antibody responses, we isolated human monoclonal antibodies (mAbs) from single-cell-sorted plasmablasts from the peripheral blood of 3 adult vaccinees. The mAbs were screened for binding to 4C-MenB components by a Luminex bead-based assay. OMV-specific mAbs were purified, tested for functionality by serum bactericidal assay (SBA) on 18 different MenB strains, and characterized on a protein microarray containing a panel of prioritized meningococcal proteins. The bactericidal mAbs were found to recognize the outer membrane proteins PorA and PorB, underscoring the importance of PorB in cross-strain protection. In addition, RmpM, BamE, Hyp1065 and ComL were identified as immunogenic components of the 4C-MenB vaccine.
Abstract:
Amphibians have been declining worldwide, and comprehension of the threats that they face could be improved by using mark-recapture models to estimate vital rates of natural populations. Recently, the consequences of marking amphibians have been under discussion and the effects of toe clipping on survival are debated, although it is still the most common technique for individually identifying amphibians. The passive integrated transponder (PIT tag) is an alternative technique, but comparisons among marking techniques in free-ranging populations are still lacking. We compared these two marking techniques using mark-recapture models to estimate the apparent survival and recapture probability of a neotropical population of the blacksmith tree frog, Hypsiboas faber. We tested the effects of marking technique and number of toe pads removed while controlling for sex. Survival was similar among groups, although it decreased slightly from individuals with one toe pad removed, to individuals with two and three toe pads removed, and finally to PIT-tagged individuals. No sex differences were detected. Recapture probability increased slightly with the number of toe pads removed and was lowest for PIT-tagged individuals. Sex was an important predictor of recapture probability, with males being nearly five times more likely to be recaptured. Potential negative effects of both techniques may include reduced locomotion and high stress levels. We recommend the use of covariates in models to better understand the effects of marking techniques on frogs. The effect of the technique should be accounted for in the results, because most techniques may reduce survival. Based on our results, but also on the logistical and cost issues associated with PIT tagging, we suggest the use of toe clipping with anurans like the blacksmith tree frog.
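How mark-recapture models yield separate estimates of apparent survival and recapture probability can be shown with a deliberately minimal Cormack-Jolly-Seber-style sketch: three occasions, all animals marked at occasion 1, and constant parameters (toy values, not the study's estimates or its covariate structure).

```python
import numpy as np

rng = np.random.default_rng(4)
# Simulate toy recapture histories with constant apparent survival phi
# and recapture probability p (illustrative values only).
phi_true, p_true, n = 0.8, 0.6, 20_000
alive2 = rng.random(n) < phi_true
seen2 = alive2 & (rng.random(n) < p_true)
alive3 = alive2 & (rng.random(n) < phi_true)
seen3 = alive3 & (rng.random(n) < p_true)

# Multinomial counts over the four recapture histories: 11, 10, 01, 00
counts = np.array([np.sum(seen2 & seen3), np.sum(seen2 & ~seen3),
                   np.sum(~seen2 & seen3), np.sum(~seen2 & ~seen3)])

def neg_log_lik(phi, p):
    a = phi * p          # survived and caught
    b = phi * (1 - p)    # survived but missed
    probs = np.array([a * a, a * (1 - a), b * a, 1 - a - b * a])
    return -np.dot(counts, np.log(probs))

# Grid-search maximum likelihood for (phi, p)
grid = np.linspace(0.05, 0.95, 91)
nll = np.array([[neg_log_lik(f, q) for q in grid] for f in grid])
i, j = np.unravel_index(np.argmin(nll), nll.shape)
phi_hat, p_hat = grid[i], grid[j]
print(phi_hat, p_hat)  # close to (0.8, 0.6)
```

The study's models additionally let phi and p depend on covariates (marking technique, number of toe pads removed, sex), which is the recommended way to quantify marking effects.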
Abstract:
Streptococcus sanguinis is a commensal pioneer colonizer of teeth and an opportunistic pathogen of infectious endocarditis. The establishment of S. sanguinis in host sites likely requires dynamic fitting of the cell wall in response to local stimuli. In this study, we investigated the two-component system (TCS) VicRK in S. sanguinis (VicRKSs), which regulates genes of cell wall biogenesis, biofilm formation, and virulence in opportunistic pathogens. A vicK knockout mutant obtained from strain SK36 (SKvic) showed slight reductions in aerobic growth and resistance to oxidative stress but an impaired ability to form biofilms, a phenotype restored in the complemented mutant. The biofilm-defective phenotype was associated with reduced amounts of extracellular DNA during aerobic growth, with reduced production of H2O2, a metabolic product associated with DNA release, and with inhibitory capacity of S. sanguinis competitor species. No changes in autolysis or cell surface hydrophobicity were detected in SKvic. Reverse transcription-quantitative PCR (RT-qPCR), electrophoretic mobility shift assays (EMSA), and promoter sequence analyses revealed that VicR directly regulates genes encoding murein hydrolases (SSA_0094, cwdP, and gbpB) and spxB, which encodes pyruvate oxidase for H2O2 production. Genes previously associated with spxB expression (spxR, ccpA, ackA, and tpK) were not transcriptionally affected in SKvic. RT-qPCR analyses of S. sanguinis biofilm cells further showed upregulation of VicRK targets (spxB, gbpB, and SSA_0094) and other genes for biofilm formation (gtfP and comE) compared to expression in planktonic cells. This study provides evidence that VicRKSs regulates functions crucial for S. sanguinis establishment in biofilms and identifies novel VicRK targets potentially involved in hydrolytic activities of the cell wall required for these functions.
Abstract:
The 'dilution effect' (DE) hypothesis predicts that diverse host communities will show reduced disease. The underlying causes of pathogen dilution are complex, because they involve non-additive (driven by host interactions and differential habitat use) and additive (controlled by host species composition) mechanisms. Here, we used measures of complementarity and selection traditionally employed in the field of biodiversity-ecosystem function (BEF) to quantify the net effect of host diversity on disease dynamics of the amphibian-killing fungus Batrachochytrium dendrobatidis (Bd). Complementarity occurs when average infection load in diverse host assemblages departs from that of each component species in uniform populations. Selection measures the disproportionate impact of a particular species in diverse assemblages compared with its performance in uniform populations, and therefore has strong additive and non-additive properties. We experimentally infected tropical amphibian species of varying life histories, in single- and multi-host treatments, and measured individual Bd infection loads. Host diversity reduced Bd infection in amphibians through a mechanism analogous to complementarity (sensu BEF), potentially by reducing shared habitat use and transmission among hosts. Additionally, the selection component indicated that one particular terrestrial species showed reduced infection loads in diverse assemblages at the expense of neighbouring aquatic hosts becoming heavily infected. By partitioning components of diversity, our findings underscore the importance of additive and non-additive mechanisms underlying the DE.
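The additive partition borrowed from BEF research can be sketched numerically. This is a Loreau-Hector-style partition applied to infection loads instead of biomass; the numbers are illustrative, not the study's data.

```python
import numpy as np

# Toy partition of the net diversity effect on infection load.
M = np.array([10.0, 6.0, 3.0])      # mean load of each species alone
Y_mix = np.array([2.0, 2.5, 1.5])   # per-species load in the mixed assemblage
expected_RY = np.full(3, 1 / 3)     # expected relative contribution per species

dRY = Y_mix / M - expected_RY       # deviation in relative load
N = len(M)
# Complementarity: all species shifting together relative to monoculture
complementarity = N * dRY.mean() * M.mean()
# Selection: species with high monoculture loads deviating disproportionately
selection = N * np.cov(dRY, M, bias=True)[0, 1]
net_effect = Y_mix.sum() - (expected_RY * M).sum()

# The partition is exact: net effect = complementarity + selection
print(np.isclose(net_effect, complementarity + selection))  # True
```

A negative net effect, as in this toy example, corresponds to dilution: total infection load in the diverse assemblage falls below the monoculture-based expectation.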
Abstract:
In the title compound, C17H15NO4, the conformation about the C=C double bond [1.348 (2) Å] is E, with the ketone group almost co-planar [C-C-C-C torsion angle = 7.2 (2)°] but the phenyl group twisted away [C-C-C-C = 160.93 (17)°]. The terminal aromatic rings are almost perpendicular to each other [dihedral angle = 81.61 (9)°], giving the molecule an overall U-shape. The crystal packing features benzene-C-H⋯O(ketone) contacts that lead to supramolecular helical chains along the b axis. These are connected by π-π interactions between benzene and phenyl rings [inter-centroid distance = 3.6648 (14) Å], resulting in the formation of a supramolecular layer in the bc plane.
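The torsion angles quoted in these crystallographic abstracts are computed from four sequential atomic positions; a minimal sketch of the standard calculation (with made-up coordinates, not the refined structure) is:

```python
import numpy as np

def torsion_angle(p0, p1, p2, p3):
    """Torsion (dihedral) angle in degrees defined by four atom positions."""
    b1, b2, b3 = p1 - p0, p2 - p1, p3 - p2
    n1, n2 = np.cross(b1, b2), np.cross(b2, b3)   # plane normals
    m = np.cross(n1, b2 / np.linalg.norm(b2))
    return np.degrees(np.arctan2(m @ n2, n1 @ n2))

# Planar zig-zag chain of four atoms -> torsion of 180° (trans geometry);
# values near 0° indicate a coplanar (cis-like) arrangement, as for the
# ketone groups quoted above.
pts = [np.array(p, float) for p in [(0, 0, 0), (1, 1, 0), (2, 0, 0), (3, 1, 0)]]
print(round(torsion_angle(*pts)))  # 180
```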
Abstract:
In the title compound, C17H14N2O6, the conformation about the C=C double bond [1.345 (2) Å] is E, with the ketone moiety almost coplanar [C-C-C-C torsion angle = 9.5 (2)°] along with the phenyl ring [C-C-C-C = 5.9 (2)°]. The aromatic rings are almost perpendicular to each other [dihedral angle = 86.66 (7)°]. The 4-nitro moiety is approximately coplanar with the benzene ring to which it is attached [O-N-C-C = 4.2 (2)°], whereas the one in the ortho position is twisted [O-N-C-C = 138.28 (13)°]. The molecules associate via C-H⋯O interactions, involving both O atoms from the 2-nitro group, to form a helical supramolecular chain along [010]. Nitro-nitro N⋯O interactions [2.8461 (19) Å] connect the chains into layers that stack along [001].