935 results for structure-function map


Relevance:

30.00%

Publisher:

Abstract:

Arguably the most complex cortical functions are seated in human cognition, the how and why of which have been debated for centuries by theologians, philosophers and scientists alike. In his best-selling book, The Astonishing Hypothesis: The Scientific Search for the Soul, Francis Crick refined the view that these qualities are determined solely by cortical cells and circuitry. Put simply, cognition is nothing more, or less, than a biological function. Accepting this to be the case, it should be possible to identify the mechanisms that subserve cognitive processing. Since the pioneering studies of Lorente de Nó and Hebb, and the more recent studies of Fuster, Miller and Goldman-Rakic, to mention but a few, much attention has been focused on the role of persistent neural activity in cognitive processes. Application of modern technologies and modelling techniques has led to new hypotheses about the mechanisms of persistent activity. Here I focus on how regional variations in the pyramidal cell phenotype may determine the complexity of cortical circuitry and, in turn, influence neural activity. Data obtained from thousands of individually injected pyramidal cells in sensory, motor, association and executive cortex reveal marked differences in the numbers of putative excitatory inputs received by these cells. Pyramidal cells in prefrontal cortex have, on average, up to 23 times more dendritic spines than those in the primary visual area. I propose that without these specializations in the structure of pyramidal cells, and the circuits they form, human cognitive processing would not have evolved to its present state. I also present data from both New World and Old World monkeys that show varying degrees of complexity in the pyramidal cell phenotype in their prefrontal cortices, suggesting that cortical circuitry and, thus, cognitive styles are evolving independently in different species.

Relevance:

30.00%

Publisher:

Abstract:

This work presents a reflection on Design education and specifically on the role of Drawing in this area. As a subject, Design has expanded its field of action into new areas such as Experience Design or Service Design. It became necessary for the designer to have more than an education based on technological knowledge or know-how. Many authors, such as Meredith Davis, Don Norman or Jamie Hobson, point out the urgency of reviewing the curricula of Design courses because nowadays "… design is more than appearance, design is about interaction, about strategy and about services. Designers change social behavior" (Norman, 2011). When shifting from a product-centered design to a person-centered design (in a structure, a service or in a relationship), what should the function of drawing in a design course be? What should its curriculum be? Our methodology is to confront today's perspectives on design theory and practice in an attempt to add to the discussion on methodological strategies in design teaching in the contemporary context.

Relevance:

30.00%

Publisher:

Abstract:

In the Sparse Point Representation (SPR) method, the principle is to retain the function data indicated by significant interpolatory wavelet coefficients, which are defined as interpolation errors by means of an interpolating subdivision scheme. Typically, an SPR grid is coarse in smooth regions and refined close to irregularities. Furthermore, the computation of partial derivatives of a function from its SPR content is performed in two steps. The first is a refinement procedure that extends the SPR by including new interpolated point values in a security zone. Then, for points in the refined grid, the derivatives are approximated by uniform finite differences, using a step size proportional to each point's local scale. If required neighboring stencils are not present in the grid, the corresponding missing point values are approximated from coarser scales using the interpolating subdivision scheme. Using the cubic interpolating subdivision scheme, we demonstrate that such adaptive finite differences can be formulated as a collocation scheme based on the wavelet expansion associated with the SPR. For this purpose, we prove some results concerning the local behavior of such wavelet reconstruction operators, which hold for SPR grids having appropriate structure. This implies that the adaptive finite difference scheme and the one using the step size of the finest level produce the same result at SPR grid points. Consequently, in addition to the refinement strategy, our analysis indicates that some care must be taken concerning the grid structure in order to keep the truncation error under a certain accuracy limit. Illustrative results are presented for numerical solutions of the 2D Maxwell equations.
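The selection step the abstract describes can be sketched compactly. The following Python fragment is a minimal illustration (not the paper's code): it computes interpolatory wavelet coefficients as prediction errors of the cubic four-point Deslauriers-Dubuc subdivision scheme and thresholds them to obtain an SPR mask; the function names, the boundary clamping, and the tolerance eps are assumptions of this sketch.

```python
import numpy as np

def dd4_predict(c, k):
    # Cubic (4-point Deslauriers-Dubuc) prediction of the midpoint
    # between coarse samples c[k] and c[k+1]; indices are clamped at
    # the boundaries for simplicity.
    km1, kp2 = max(k - 1, 0), min(k + 2, len(c) - 1)
    return (-c[km1] + 9.0 * c[k] + 9.0 * c[k + 1] - c[kp2]) / 16.0

def spr_mask(f, levels, eps):
    # A grid point is retained when its interpolatory wavelet
    # coefficient (the interpolation error at that point) exceeds eps;
    # the coarsest grid is always retained.
    n = len(f)                      # expects n = 2**levels * m + 1
    keep = np.zeros(n, dtype=bool)
    keep[::2 ** levels] = True
    keep[-1] = True
    for lev in range(levels, 0, -1):
        step = 2 ** lev
        coarse = f[::step]
        for k in range(len(coarse) - 1):
            mid = k * step + step // 2
            if abs(f[mid] - dd4_predict(coarse, k)) > eps:
                keep[mid] = True
    return keep

# A function smooth except near x = 0.5 yields a grid that is coarse in
# smooth regions and refined close to the irregularity.
x = np.linspace(0.0, 1.0, 2 ** 10 + 1)
f = np.tanh(80.0 * (x - 0.5))
mask = spr_mask(f, levels=6, eps=1e-3)
print(mask.sum(), "of", mask.size, "points retained")
```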

Relevance:

30.00%

Publisher:

Abstract:

We present new Rayleigh-wave dispersion maps of the western Iberian Peninsula for periods between 8 and 30 s, obtained from correlations of seismic ambient noise, following the recent increase in seismic broadband network density in Portugal and Spain. Group velocities have been computed for each station pair using the empirical Green's functions generated by cross-correlating one-day-long seismic ambient-noise records. The resulting high path density allows us to obtain lateral variations of the group velocities as a function of period, in cells of 0.5 degrees x 0.5 degrees, with unprecedented resolution. As a result, we were able to address some of the unknowns regarding the lithospheric structure beneath SW Iberia. The dispersion maps allow the imaging of the major structural units, namely the Iberian Massif and the Lusitanian and Algarve Meso-Cenozoic basins. The Gulf of Cadiz/Strait of Gibraltar area corresponds to a strong low-velocity anomaly, which can be followed to the largest period inverted, although slightly shifted to the east at longer periods. Within the Iberian Massif, second-order perturbations in the group velocities are consistent with the transitions between the tectonic units composing the massif. (C) 2013 Elsevier B.V. All rights reserved.
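The correlate-and-stack step described above admits a schematic sketch. The Python fragment below is illustrative only and assumes two already pre-processed (detrended, whitened) synchronized daily records; the function names are hypothetical and none of the real pipeline's pre-processing is shown.

```python
import numpy as np

def daily_cross_correlation(a, b, max_lag):
    # FFT-based cross-correlation of two synchronized one-day noise
    # records, keeping lags within +/- max_lag samples.
    nfft = 1 << (len(a) + len(b) - 1).bit_length()
    spec = np.fft.rfft(a, nfft) * np.conj(np.fft.rfft(b, nfft))
    cc = np.fft.irfft(spec, nfft)
    # Negative lags (wrapped at the end), then zero and positive lags.
    return np.concatenate((cc[-max_lag:], cc[:max_lag + 1]))

def stacked_greens_function(days_a, days_b, max_lag):
    # Stacking many normalized daily correlations converges toward the
    # empirical Green's function between the two stations.
    stack = np.zeros(2 * max_lag + 1)
    for a, b in zip(days_a, days_b):
        cc = daily_cross_correlation(a, b, max_lag)
        stack += cc / max(np.abs(cc).max(), 1e-12)
    return stack / len(days_a)
```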

Relevance:

30.00%

Publisher:

Abstract:

Hydroxycinnamic acids (HCAs) are important phytochemicals possessing significant biological properties. Several investigators have studied the in vitro antioxidant activity of HCAs in detail. In this review, we have gathered the studies focused on the structure-activity relationships (SARs) of these compounds that have used medicinal chemistry to generate more potent antioxidant molecules. Most of the reports indicated that the presence of an unsaturated bond on the side chain of HCAs is vital to their activity. The structural features reported to be important to the antioxidant activity were categorized as follows: modifications of the aromatic ring, which include alterations in the number and position of hydroxy groups and insertion of electron-donating or electron-withdrawing moieties, and modifications of the carboxylic function, which include esterification and amidation. Furthermore, reports that have addressed the influence of physicochemical properties, including redox potential, lipid solubility and dissociation constant, on the antioxidant activity are also summarized. Finally, the pro-oxidant effect of HCAs in some test systems is addressed. Most of the investigations concluded that the presence of an ortho-dihydroxy phenyl group (catechol moiety) is of significant importance to the antioxidant activity, while the presence of three hydroxy groups does not necessarily improve the activity. Optimization of the structure of molecular leads is an important task of modern medicinal chemistry, and its accomplishment relies on the careful assessment of SARs. SAR studies on HCAs can identify the most successful antioxidants, which could be useful for the management of oxidative stress-related diseases.

Relevance:

30.00%

Publisher:

Abstract:

This Thesis describes the application of automatic learning methods for a) the classification of organic and metabolic reactions, and b) the mapping of Potential Energy Surfaces (PES). The classification of reactions was approached with two distinct methodologies: a representation of chemical reactions based on NMR data, and a representation of chemical reactions from the reaction equation based on the physico-chemical and topological features of chemical bonds.

NMR-based classification of photochemical and enzymatic reactions. Photochemical and metabolic reactions were classified by Kohonen Self-Organizing Maps (Kohonen SOMs) and Random Forests (RFs) taking as input the difference between the 1H NMR spectra of the products and the reactants. Such a representation can be applied to the automatic analysis of changes in the 1H NMR spectrum of a mixture and their interpretation in terms of the chemical reactions taking place. Examples of possible applications are the monitoring of reaction processes, evaluation of the stability of chemicals, or even the interpretation of metabonomic data. A Kohonen SOM trained with a data set of metabolic reactions catalysed by transferases was able to correctly classify 75% of an independent test set in terms of the EC number subclass. Random Forests improved the correct predictions to 79%. With photochemical reactions classified into 7 groups, an independent test set was classified with 86-93% accuracy. The data set of photochemical reactions was also used to simulate mixtures in which two reactions occur simultaneously. Kohonen SOMs and Feed-Forward Neural Networks (FFNNs) were trained to classify the reactions occurring in a mixture based on the 1H NMR spectra of the products and reactants. Kohonen SOMs allowed the correct assignment of 53-63% of the mixtures (in a test set); Counter-Propagation Neural Networks (CPNNs) gave similar results. The use of supervised learning techniques improved the results: to 77% of correct assignments when an ensemble of ten FFNNs was used, and to 80% when Random Forests were used. This study was performed with NMR data simulated from the molecular structure by the SPINUS program. In the design of one test set, simulated data was combined with experimental data. The results support the proposal of linking databases of chemical reactions to experimental or simulated NMR data for automatic classification of reactions and mixtures of reactions.

Genome-scale classification of enzymatic reactions from their reaction equation. The MOLMAP descriptor relies on a Kohonen SOM that defines types of bonds on the basis of their physico-chemical and topological properties. The MOLMAP descriptor of a molecule represents the types of bonds available in that molecule. The MOLMAP descriptor of a reaction is defined as the difference between the MOLMAPs of the products and the reactants, and numerically encodes the pattern of bonds that are broken, changed, and made during a chemical reaction. The automatic perception of chemical similarities between metabolic reactions is required for a variety of applications, ranging from the computer validation of classification systems and genome-scale reconstruction (or comparison) of metabolic pathways to the classification of enzymatic mechanisms.
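The MOLMAP construction just described lends itself to a compact sketch. The fragment below is an illustration under the abstract's own definitions, not the Thesis code; som_winner is a hypothetical stand-in for a pre-trained Kohonen SOM of bond types, and the grid shape is arbitrary.

```python
import numpy as np

def molmap(bond_features, som_winner, grid_shape=(15, 15)):
    # MOLMAP of a molecule: for each bond, find the winning neuron of a
    # pre-trained SOM of bond types and accumulate the activations.
    m = np.zeros(grid_shape)
    for feat in bond_features:
        i, j = som_winner(feat)   # hypothetical SOM lookup
        m[i, j] += 1.0
    return m

def reaction_molmap(product_maps, reactant_maps):
    # Difference of summed MOLMAPs: positive entries encode bonds made,
    # negative entries bonds broken during the reaction.
    return sum(product_maps) - sum(reactant_maps)
```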
Catalytic functions of proteins are generally described by EC numbers, which are simultaneously employed as identifiers of reactions, enzymes, and enzyme genes, thus linking metabolic and genomic information. Different methods should be available to automatically compare metabolic reactions and to automatically assign EC numbers to reactions not yet officially classified. In this study, the genome-scale data set of enzymatic reactions available in the KEGG database was encoded by MOLMAP descriptors and submitted to Kohonen SOMs to compare the resulting map with the official EC number classification, to explore the possibility of predicting EC numbers from the reaction equation, and to assess the internal consistency of the EC classification at the class level. A general agreement with the EC classification was observed, i.e. a relationship between the similarity of MOLMAPs and the similarity of EC numbers. At the same time, MOLMAPs were able to discriminate between EC sub-subclasses. EC numbers could be assigned at the class, subclass, and sub-subclass levels with accuracies up to 92%, 80%, and 70% for independent test sets. The correspondence between the chemical similarity of metabolic reactions and their MOLMAP descriptors was applied to the identification of a number of reactions mapped into the same neuron but belonging to different EC classes, which demonstrated the ability of the MOLMAP/SOM approach to verify the internal consistency of classifications in databases of metabolic reactions. RFs were also used to assign the four levels of the EC hierarchy from the reaction equation. EC numbers were correctly assigned in 95%, 90%, 85% and 86% of the cases (for independent test sets) at the class, subclass, sub-subclass and full EC number levels, respectively. Experiments for the classification of reactions from the main reactants and products were performed with RFs; EC numbers were assigned at the class, subclass and sub-subclass levels with accuracies of 78%, 74% and 63%, respectively. In the course of the experiments with metabolic reactions, we suggested that the MOLMAP/SOM concept could be extended to the representation of other levels of metabolic information, such as metabolic pathways. Following the MOLMAP idea, the pattern of neurons activated by the reactions of a metabolic pathway is a representation of the reactions involved in that pathway, i.e. a descriptor of the metabolic pathway. This reasoning enabled the comparison of different pathways, the automatic classification of pathways, and a classification of organisms based on their biochemical machinery. The three levels of classification (from bonds to metabolic pathways) made it possible to map and perceive chemical similarities between metabolic pathways, even for pathways of different types of metabolism and pathways that do not share similarities in terms of EC numbers.

Mapping of PES by neural networks (NNs). In a first series of experiments, ensembles of Feed-Forward NNs (EnsFFNNs) and Associative Neural Networks (ASNNs) were trained to reproduce PES represented by the Lennard-Jones (LJ) analytical potential function. The accuracy of the method was assessed by comparing the results of molecular dynamics simulations (thermal, structural, and dynamic properties) obtained from the NN-PES and from the LJ function. The results indicated that, for LJ-type potentials, NNs can be trained to generate accurate PES for use in molecular simulations. EnsFFNNs and ASNNs gave better results than single FFNNs.
The NN models showed a remarkable ability to interpolate between distant curves and to accurately reproduce potentials for use in molecular simulations. The purpose of the first study was to systematically analyse the accuracy of different NNs. Our main motivation, however, is reflected in the next study: the mapping of multidimensional PES by NNs to simulate, by Molecular Dynamics or Monte Carlo, the adsorption and self-assembly of solvated organic molecules on noble-metal electrodes. Indeed, for such complex and heterogeneous systems, the development of suitable analytical functions that fit quantum mechanical interaction energies is a non-trivial or even impossible task. The data consisted of energy values, from Density Functional Theory (DFT) calculations, at different distances, for several molecular orientations and three electrode adsorption sites. The results indicate that NNs require a data set large enough to cover well the diversity of possible interaction sites, distances, and orientations. NNs trained with such data sets can perform equally well as, or even better than, analytical functions. Therefore, they can be used in molecular simulations, particularly for the ethanol/Au(111) interface, which is the case studied in the present Thesis. Once properly trained, the networks are able to produce, as output, any required number of energy points for accurate interpolations.
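To make the first PES experiment concrete, here is a toy fit of the LJ potential with a single feed-forward network trained by plain gradient descent. This is a self-contained illustration only: the Thesis used ensembles of FFNNs and ASNNs, and the hidden-layer size, learning rate and iteration count below are arbitrary choices of this sketch.

```python
import numpy as np

def lennard_jones(r, eps=1.0, sigma=1.0):
    # LJ pair potential used as the reference PES.
    sr6 = (sigma / r) ** 6
    return 4.0 * eps * (sr6 ** 2 - sr6)

rng = np.random.default_rng(0)
r = np.linspace(0.9, 3.0, 200)
y = lennard_jones(r)
# Standardize input and target for a stable fit.
X = ((r - r.mean()) / r.std())[:, None]
Y = ((y - y.mean()) / y.std())[:, None]

# One hidden tanh layer, full-batch gradient descent on the MSE.
h, lr = 20, 1e-2
W1 = rng.normal(0.0, 1.0, (1, h)); b1 = np.zeros(h)
W2 = rng.normal(0.0, 1.0, (h, 1)); b2 = np.zeros(1)
for _ in range(20000):
    A = np.tanh(X @ W1 + b1)            # hidden activations
    P = A @ W2 + b2                     # predicted (standardized) energy
    G = 2.0 * (P - Y) / len(Y)          # dMSE/dP
    GA = (G @ W2.T) * (1.0 - A ** 2)    # back-prop through tanh
    W2 -= lr * (A.T @ G); b2 -= lr * G.sum(0)
    W1 -= lr * (X.T @ GA); b1 -= lr * GA.sum(0)
print("MSE on the training grid:", float(np.mean((P - Y) ** 2)))
```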

Relevance:

30.00%

Publisher:

Abstract:

We present the first image of the Madeira upper crustal structure, obtained using ambient seismic noise tomography. Sixteen months of ambient noise, recorded by a dense network of 26 seismometers deployed across Madeira, allowed the reconstruction of Rayleigh-wave Green's functions between receivers. Dispersion analysis was performed in the short-period band from 1.0 to 4.0 s. Group velocity measurements were regionalized to obtain 20 tomographic images, with a lateral resolution of 2.0 km in central Madeira. Afterwards, the dispersion curves extracted from each cell of the 2D group velocity maps were inverted as a function of depth to obtain a 3D shear-wave velocity model of the upper crust, from the surface down to a depth of 2.0 km. The obtained 3D velocity model reveals features throughout the island that correlate well with surface geology and island evolution. (C) 2015 Elsevier B.V. All rights reserved.
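The per-cell depth inversion can be sketched as a simple grid search. This is purely illustrative: forward_group_velocity is a hypothetical stand-in for a real surface-wave forward solver, the two-layer parameterization and grid bounds are arbitrary, and the actual study's inversion is more sophisticated.

```python
import numpy as np
from itertools import product

def invert_cell(periods, u_obs, forward_group_velocity):
    # Grid-search a two-layer shear model (vs1, vs2 in km/s, interface
    # depth h in km) minimizing the least-squares misfit to the
    # observed dispersion curve of one tomographic cell.
    best, best_misfit = None, np.inf
    for vs1, vs2, h in product(np.arange(1.0, 3.05, 0.1),
                               np.arange(1.5, 4.05, 0.1),
                               np.arange(0.2, 2.05, 0.1)):
        u_pred = forward_group_velocity((vs1, vs2, h), periods)
        misfit = float(np.sum((u_obs - u_pred) ** 2))
        if misfit < best_misfit:
            best, best_misfit = (vs1, vs2, h), misfit
    return best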

Relevance:

30.00%

Publisher:

Abstract:

The development of high spatial resolution airborne and spaceborne sensors has improved the capability of ground-based data collection in the fields of agriculture, geography, geology, mineral identification, detection [2, 3], and classification [4–8]. The signal read by the sensor from a given spatial element of resolution and at a given spectral band is a mixture of components originated by the constituent substances, termed endmembers, located at that element of resolution. This chapter addresses hyperspectral unmixing, which is the decomposition of the pixel spectra into a collection of constituent spectra, or spectral signatures, and their corresponding fractional abundances indicating the proportion of each endmember present in the pixel [9, 10].

Depending on the mixing scales at each pixel, the observed mixture is either linear or nonlinear [11, 12]. The linear mixing model holds when the mixing scale is macroscopic [13]; the nonlinear model holds when the mixing scale is microscopic (i.e., intimate mixtures) [14, 15]. The linear model assumes negligible interaction among distinct endmembers [16, 17]. The nonlinear model assumes that incident solar radiation is scattered by the scene through multiple bounces involving several endmembers [18].

Under the linear mixing model, and assuming that the number of endmembers and their spectral signatures are known, hyperspectral unmixing is a linear problem, which can be addressed, for example, under the maximum likelihood setup [19], the constrained least-squares approach [20], spectral signature matching [21], the spectral angle mapper [22], and subspace projection methods [20, 23, 24]. Orthogonal subspace projection [23] reduces the data dimensionality, suppresses undesired spectral signatures, and detects the presence of a spectral signature of interest. The basic concept is to project each pixel onto a subspace that is orthogonal to the undesired signatures. As shown in Settle [19], the orthogonal subspace projection technique is equivalent to the maximum likelihood estimator. This projection technique was extended by three unconstrained least-squares approaches [24] (signature space orthogonal projection, oblique subspace projection, target signature space orthogonal projection). Other works using the maximum a posteriori probability (MAP) framework [25] and projection pursuit [26, 27] have also been applied to hyperspectral data.

In most cases the number of endmembers and their signatures are not known. Independent component analysis (ICA) is an unsupervised source separation process that has been applied with success to blind source separation, to feature extraction, and to unsupervised recognition [28, 29]. ICA consists of finding a linear decomposition of the observed data that yields statistically independent components. Given that hyperspectral data are, in given circumstances, linear mixtures, ICA comes to mind as a possible tool to unmix this class of data. In fact, the application of ICA to hyperspectral data has been proposed in reference 30, where endmember signatures are treated as sources and the mixing matrix is composed of the abundance fractions, and in references 9, 25, and 31–38, where sources are the abundance fractions of each endmember. In the first approach, we face two problems: (1) the number of samples is limited to the number of channels, and (2) the process of pixel selection, playing the role of mixed sources, is not straightforward.
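Returning to the known-endmember linear case discussed above, the constrained least-squares route admits a compact sketch. This is an illustration, not the chapter's code: the soft sum-to-one row is the usual fully-constrained least-squares device, and the weight delta is an arbitrary choice of this sketch.

```python
import numpy as np
from scipy.optimize import nnls

def fcls_unmix(M, y, delta=1e3):
    # Abundances for one pixel under y ~ M a, with a >= 0 and
    # sum(a) = 1; the sum-to-one constraint is enforced softly by
    # appending a heavily weighted row of ones.
    L, p = M.shape
    M_aug = np.vstack([M, delta * np.ones((1, p))])
    y_aug = np.append(y, delta)
    a, _ = nnls(M_aug, y_aug)
    return a

# Toy check: 3 endmembers over 50 bands, one noisy mixed pixel.
rng = np.random.default_rng(1)
M = rng.uniform(0.0, 1.0, (50, 3))
a_true = np.array([0.6, 0.3, 0.1])
y = M @ a_true + 0.001 * rng.normal(size=50)
print(fcls_unmix(M, y))   # approximately a_true
```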
In the second approach, ICA is based on the assumption of mutually independent sources, which is not the case for hyperspectral data, since the sum of the abundance fractions is constant, implying dependence among abundances. This dependence compromises the applicability of ICA to hyperspectral images. In addition, hyperspectral data are immersed in noise, which degrades ICA performance. IFA [39] was introduced as a method for recovering independent hidden sources from their observed noisy mixtures. IFA implements two steps: first, source densities and noise covariance are estimated from the observed data by maximum likelihood; second, sources are reconstructed by an optimal nonlinear estimator. Although IFA is a well-suited technique to unmix independent sources under noisy observations, the dependence among abundance fractions in hyperspectral imagery compromises, as in the ICA case, the IFA performance.

Considering the linear mixing model, hyperspectral observations lie in a simplex whose vertices correspond to the endmembers. Several approaches [40–43] have exploited this geometric feature of hyperspectral mixtures [42]. The minimum volume transform (MVT) algorithm [43] determines the simplex of minimum volume containing the data. MVT-type approaches are computationally complex: usually, these algorithms first find the convex hull defined by the observed data and then fit a minimum volume simplex to it. Aiming at a lower computational complexity, some algorithms, such as vertex component analysis (VCA) [44], the pixel purity index (PPI) [42], and N-FINDR [45], still find the minimum volume simplex containing the data cloud, but they assume the presence in the data of at least one pure pixel of each endmember. This is a strong requisite that may not hold in some data sets. In any case, these algorithms find the set of most pure pixels in the data.

Hyperspectral sensors collect spatial images over many narrow contiguous bands, yielding large amounts of data. For this reason, the processing of hyperspectral data, including unmixing, is very often preceded by a dimensionality reduction step, to reduce computational complexity and to improve the signal-to-noise ratio (SNR). Principal component analysis (PCA) [46], maximum noise fraction (MNF) [47], and singular value decomposition (SVD) [48] are three well-known projection techniques widely used in remote sensing in general and in unmixing in particular. A newly introduced method [49] exploits the structure of hyperspectral mixtures, namely the fact that spectral vectors are nonnegative. The computational complexity associated with these techniques is an obstacle to real-time implementations; to overcome this problem, band selection [50] and non-statistical [51] algorithms have been introduced.

This chapter addresses hyperspectral data source dependence and its impact on ICA and IFA performance. The study considers simulated and real data and is based on mutual information minimization. Hyperspectral observations are described by a generative model that takes into account the degradation mechanisms normally found in hyperspectral applications, namely signature variability [52–54], abundance constraints, topography modulation, and system noise. The computation of mutual information is based on fitting mixtures of Gaussians (MOG) to the data. The MOG parameters (number of components, means, covariances, and weights) are inferred using a minimum description length (MDL) based algorithm [55].
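The dimensionality-reduction step mentioned above is routinely done with PCA via an SVD of the mean-removed data. A minimal sketch follows; the function name is illustrative and spectra are assumed to be the columns of Y.

```python
import numpy as np

def pca_reduce(Y, k):
    # Project L-band spectra (columns of the L x N matrix Y) onto the
    # k leading principal components estimated from the data.
    mu = Y.mean(axis=1, keepdims=True)
    U, s, _ = np.linalg.svd(Y - mu, full_matrices=False)
    E = U[:, :k]                      # orthonormal signal-subspace basis
    return E.T @ (Y - mu), E, mu      # scores, basis, mean

# Usage: X, E, mu = pca_reduce(Y, k); approximate Y by E @ X + mu.
```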
We study the behavior of the mutual information as a function of the unmixing matrix. The conclusion is that the unmixing matrix minimizing the mutual information might be very far from the true one. Nevertheless, some abundance fractions might be well separated, mainly in the presence of strong signature variability, a large number of endmembers, and high SNR. We end this chapter by sketching a new methodology to blindly unmix hyperspectral data, in which abundance fractions are modeled as a mixture of Dirichlet sources. This model enforces the positivity and constant-sum (full additivity) constraints on the sources. The mixing matrix is inferred by an expectation-maximization (EM)-type algorithm. This approach is in the vein of references 39 and 56, replacing independent sources represented by MOG with a mixture of Dirichlet sources. Compared with the geometric-based approaches, the advantage of this model is that there is no need for pure pixels in the observations. The chapter is organized as follows. Section 6.2 presents a spectral radiance model and formulates spectral unmixing as a linear problem accounting for abundance constraints, signature variability, topography modulation, and system noise. Section 6.3 presents a brief summary of the ICA and IFA algorithms. Section 6.4 illustrates the performance of IFA and of some well-known ICA algorithms with experimental data. Section 6.5 studies the limitations of ICA and IFA in unmixing hyperspectral data. Section 6.6 presents results of ICA based on real data. Section 6.7 describes the new blind unmixing scheme and some illustrative examples. Section 6.8 concludes with some remarks.

Relevance:

30.00%

Publisher:

Abstract:

Dissertation presented to obtain the Ph.D. degree in Biology

Relevance:

30.00%

Publisher:

Abstract:

Dissertation for the Master's Degree in Structural and Functional Biochemistry

Relevance:

30.00%

Publisher:

Abstract:

A PhD dissertation presented as part of the requirements for the Degree of Doctor of Philosophy from the NOVA School of Business and Economics

Relevance:

30.00%

Publisher:

Abstract:

Dissertation presented to obtain the Ph.D. degree in Biology, Microbial Biology

Relevance:

30.00%

Publisher:

Abstract:

The cerebellar floccular complex lobes (FCLs) are housed in the FCL fossa of the periotic complex. There is experimental evidence indicating that the FCLs integrate visual and vestibular information, being responsible for the vestibulo-ocular reflex, the vestibulo-collic reflex, smooth pursuit and gaze holding. Thus, the behavior of extinct animals has been correlated with FCL dimensions in multiple paleoneuroanatomical studies. Here I analyzed braincase endocasts of a representative sample of Mammalia (48 species) and Aves (59 species), rendered using tomography and image segmentation, and tested statistical correlations between floccular complex volume and ecological and behavioral traits, to assess various previously formulated paleobiological speculations. My results demonstrate that: 1) there is no significant correlation between relative FCL volume and body mass; 2) there is no significant correlation between relative FCL and optic lobe size in birds; 3) average relative FCL size is larger in diurnal than in nocturnal birds, but there is no statistically significant difference in mammals; 4) feeding strategies are related to different FCL size patterns in birds, but not in mammals; 5) locomotion type is not related to relative FCL size in mammals; 6) agility is not significantly correlated with FCL size in mammals. I conclude that, despite the apparent relation between FCL size and ecology in birds, the cerebellum of tetrapods is a highly plastic structure and may be adapted to control different functions across different taxonomic levels. For example, the European mole (Talpa europaea), which is fossorial and practically blind, has a relatively larger FCL fossa than bats, which are highly maneuverable. Therefore, variation in FCL size may be better explained by a combination of multiple factors relating to anatomical and phylogenetic evolutionary constraints.