895 results for Markov chains. Convergence. Evolutionary Strategy. Large Deviations


Relevance:

30.00%

Publisher:

Abstract:

Centronuclear myopathy (CNM) is a genetically heterogeneous disorder associated with general skeletal muscle weakness, type I fiber predominance and atrophy, and abnormally centralized nuclei. Autosomal dominant CNM is due to mutations in the large GTPase dynamin 2 (DNM2), a mechanochemical enzyme regulating cytoskeleton and membrane trafficking in cells. To date, 40 families with CNM-related DNM2 mutations have been described, and here we report 60 additional families encompassing a broad genotypic and phenotypic spectrum. In total, 18 different mutations are reported in 100 families and our cohort harbors nine known and four new mutations, including the first splice-site mutation. Genotype-phenotype correlation hypotheses are drawn from the published and new data, and allow an efficient screening strategy for molecular diagnosis. In addition to CNM, dissimilar DNM2 mutations are associated with Charcot-Marie-Tooth (CMT) peripheral neuropathy (CMTD1B and CMT2M), suggesting a tissue-specific impact of the mutations. In this study, we discuss the possible clinical overlap of CNM and CMT, and the biological significance of the respective mutations based on the known functions of dynamin 2 and its protein structure. Defects in membrane trafficking due to DNM2 mutations potentially represent a common pathological mechanism in CNM and CMT. Hum Mutat 33: 949-959, 2012.

Relevance:

30.00%

Publisher:

Abstract:

Background: Many important toxins and antibiotics are produced by non-ribosomal biosynthetic pathways. Microcystins are a chemically diverse family of potent peptide toxins and the end-products of a hybrid NRPS and PKS secondary metabolic pathway. They are produced by a variety of cyanobacteria and are responsible for the poisoning of humans as well as the deaths of wild and domestic animals around the world. The chemical diversity of the microcystin family is attributed to a number of genetic events that have resulted in the diversification of the pathway for microcystin assembly.

Results: Here, we show that independent evolutionary events affecting the substrate specificity of the microcystin biosynthetic pathway have resulted in convergence on a rare [D-Leu1] microcystin-LR chemical variant. We detected this rare microcystin variant in strains of the distantly related genera Microcystis, Nostoc, and Phormidium. Phylogenetic analysis of the sequences of the catalytic domains within the mcy gene cluster demonstrated a clear recombination pattern in the adenylation domain phylogenetic tree. We found evidence for conversion of the gene encoding the McyA2 adenylation domain in strains of the genera Nostoc and Phormidium. In two strains of Microcystis, however, the change in substrate specificity was associated with point mutations affecting the substrate-binding sequence motifs of the McyA2 adenylation domain. In addition to the main [D-Leu1] microcystin-LR variant, these two strains produced a new microcystin that was identified as [Met1] microcystin-LR.

Conclusions: Phylogenetic analysis demonstrated that both point mutations and gene conversion result in functional mcy gene clusters that produce the same rare [D-Leu1] variant of microcystin in strains of the genera Microcystis, Nostoc, and Phormidium. Engineering pathways to produce recombinant non-ribosomal peptides could provide new natural products or increase the activity of known compounds. Our results suggest that replacing entire adenylation domains could be a more successful strategy than point mutations for modifying the substrate specificity of non-ribosomal peptides.

Relevance:

30.00%

Publisher:

Abstract:

Background: The structure of regulatory networks remains an open question in our understanding of complex biological systems. Interactions during complete viral life cycles present unique opportunities to understand how host-parasite networks take shape and behave. The Anticarsia gemmatalis multiple nucleopolyhedrovirus (AgMNPV) is a large double-stranded DNA virus whose genome may encode 152 open reading frames (ORFs). Here we present an analysis of the ordered cascade of AgMNPV gene expression.

Results: We observed an earlier onset of expression than previously reported for other baculoviruses, especially for genes involved in DNA replication. Most ORFs were expressed at higher levels in a more permissive host cell line. Genes with more than one copy in the genome had distinct expression profiles, which could indicate the acquisition of new functionalities. The transcription gene regulatory network (GRN) for 149 ORFs had a modular topology comprising five communities of highly interconnected nodes, with functionally related key genes separated into different communities, possibly maximizing redundancy and GRN robustness through compartmentalization of important functions. Core conserved functions showed expression synchronicity, distinct GRN features, and significantly less genetic diversity, consistent with evolutionary constraints imposed on key elements of biological systems. This reduced genetic diversity also correlated positively with the importance of the gene in our estimated GRN, supporting a relationship between phylogenetic data of baculovirus genes and network features inferred from expression data. We also observed that gene arrangement in overlapping transcripts was conserved among related baculoviruses, suggesting a principle of genome organization.

Conclusions: Albeit with a reduced number of nodes (149), the AgMNPV GRN had a topology and key characteristics similar to those observed in complex cellular organisms, which indicates that modularity may be a general feature of biological gene regulatory networks.
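The modular GRN topology described above can be illustrated with a generic community-detection pass over a co-expression graph. The following Python sketch is not the authors' pipeline: the toy expression data, the correlation threshold, and the choice of greedy modularity maximization are all assumptions made for illustration.

```python
import numpy as np
import networkx as nx
from networkx.algorithms.community import greedy_modularity_communities

# Toy expression matrix: rows = genes (ORFs), columns = time points.
rng = np.random.default_rng(0)
expr = rng.normal(size=(20, 12))

# Build a co-expression graph: connect gene pairs whose expression
# profiles are strongly correlated (the 0.6 threshold is arbitrary).
corr = np.corrcoef(expr)
G = nx.Graph()
G.add_nodes_from(range(expr.shape[0]))
for i in range(expr.shape[0]):
    for j in range(i + 1, expr.shape[0]):
        if abs(corr[i, j]) > 0.6:
            G.add_edge(i, j, weight=abs(corr[i, j]))

# Greedy modularity maximization yields communities of highly
# interconnected nodes, analogous to the five modules reported above.
communities = list(greedy_modularity_communities(G))
print(f"{len(communities)} communities:")
for k, c in enumerate(communities):
    print(f"  module {k}: genes {sorted(c)}")
```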

Relevance:

30.00%

Publisher:

Abstract:

The objective of this work is to present the experience of workshops developed at the University of Sao Paulo by the Integrated Library System in partnership with the Research Commission. The poster presents the main results of workshops held in 2011 in two knowledge areas, life sciences and engineering, covering science publication processes and directed at graduate students, postdoctoral fellows, researchers, professors, and library staff. The workshops made it possible to identify gaps in different aspects of scholarly communication, such as research planning, information search strategy, information organization, the submission process, and the identification of high-impact journals, areas where professors and librarians can help. Moreover, the workshops revealed that the majority of participants believe in their importance. Despite the ubiquity of digital technology, which transversally impacts all academic activities, it is imperative to promote efforts to find a convergence between information and media literacy in higher education and university research activities. This is particularly important when we talk about how science is produced, communicated, and preserved for future use. In this scenario, libraries and librarians assume a new, more active and committed role.

Relevance:

30.00%

Publisher:

Abstract:

The growing demand for industrial products is imposing an increasingly intense level of competitiveness on industrial operations. Meanwhile, the convergence of information technology (IT) and automation technology (AT) is proving to be a tool of great potential for the modernization and improvement of industrial plants. However, for this technology to fully achieve its potential, several obstacles need to be overcome, including demonstrating the reasoning behind the estimates of benefits, investments, and risks used to plan the implementation of corporate technology solutions. This article focuses on the evolutionary development of processes for planning and adopting IT & AT convergence. It proposes the incorporation of IT & AT convergence practices into Lean Thinking/Six Sigma, via a method for planning convergence-related technological activities known as the Smarter Operation Transformation (SOT) methodology. The article illustrates the SOT methodology through its application in a Brazilian company in the consumer goods sector. This application shows that IT & AT convergence is possible with low investment, reducing the risk of not achieving the targets of key indicators.

Relevance:

30.00%

Publisher:

Abstract:

We consider the Shannon mutual information of subsystems of critical quantum chains in their ground states. Our results indicate a universal leading behavior for large subsystem sizes. Moreover, as happens with the entanglement entropy, its finite-size behavior yields the conformal anomaly c of the underlying conformal field theory governing the long-distance physics of the quantum chain. We study analytically a chain of coupled harmonic oscillators and numerically the Q-state Potts models (Q = 2, 3, and 4), the XXZ quantum chain, and the spin-1 Fateev-Zamolodchikov model. The Shannon mutual information is a quantity easily computed, and our results indicate that for relatively small lattice sizes, its finite-size behavior already detects the universality class of quantum critical behavior.
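For reference, a minimal statement of the quantity under study, assuming configurations are measured in a fixed local basis (the Shannon entropy, unlike the entanglement entropy, depends on this choice of basis):

```latex
% Ground state |\Psi_0\rangle of the chain; x = (x_A, x_B) is a
% configuration in the chosen local basis. Subsystem A uses the
% marginal probabilities p(x_A) = \sum_{x_B} |\langle x_A, x_B | \Psi_0 \rangle|^2.
\begin{align}
  \mathrm{Sh}(A) &= -\sum_{x_A} p(x_A)\,\ln p(x_A), \\
  I(A\!:\!B)     &= \mathrm{Sh}(A) + \mathrm{Sh}(B) - \mathrm{Sh}(A \cup B).
\end{align}
```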

Relevance:

30.00%

Publisher:

Abstract:

[EN] The seminal work of Horn and Schunck [8] is the first variational method for optical flow estimation. It introduced a novel framework in which the optical flow is computed as the solution of a minimization problem. From the assumption that pixel intensities do not change over time, the optical flow constraint equation is derived; it relates the optical flow to the derivatives of the image. Infinitely many vector fields satisfy the optical flow constraint, so the problem is ill-posed. To overcome this, Horn and Schunck introduced an additional regularity condition that restricts the possible solutions. Their method minimizes both the optical flow constraint and the magnitude of the variations of the flow field, producing smooth vector fields. One limitation of this method is that, typically, it can only estimate small motions. In the presence of large displacements, it fails when the gradient of the image is not smooth enough. In this work, we describe an implementation of the original Horn and Schunck method and also introduce a multi-scale strategy in order to deal with larger displacements. For this multi-scale strategy, we create a pyramidal structure of downsampled images and replace the optical flow constraint equation with a nonlinear formulation. To tackle this nonlinear formula, we linearize it and solve the method iteratively at each scale. There are two common approaches here: one computes the motion increment in the iterations; the one we follow computes the full flow during the iterations. The solutions are incrementally refined over the scales. This pyramidal structure is a standard tool in many optical flow methods.
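As an illustration of the single-scale iteration described above, here is a minimal Python sketch of the classical Horn and Schunck scheme. The derivative stencils, the averaging kernel, and the parameter values are conventional textbook choices, not necessarily those of the implementation described in this work.

```python
import numpy as np
from scipy.ndimage import convolve

def horn_schunck(I1, I2, alpha=1.0, n_iter=100):
    """Minimal sketch of the classical Horn-Schunck iteration.
    I1, I2: consecutive grayscale frames; alpha: regularization weight."""
    I1 = I1.astype(np.float64)
    I2 = I2.astype(np.float64)
    # Image derivatives averaged over both frames (simple 2x2 stencils).
    kx = np.array([[-1, 1], [-1, 1]]) * 0.25
    ky = np.array([[-1, -1], [1, 1]]) * 0.25
    Ix = convolve(I1, kx) + convolve(I2, kx)
    Iy = convolve(I1, ky) + convolve(I2, ky)
    It = convolve(I2 - I1, np.full((2, 2), 0.25))
    u = np.zeros_like(I1)
    v = np.zeros_like(I1)
    # Local-average kernel coming from the discretized Laplacian.
    avg = np.array([[1/12, 1/6, 1/12],
                    [1/6,  0.0, 1/6],
                    [1/12, 1/6, 1/12]])
    for _ in range(n_iter):
        u_bar = convolve(u, avg)
        v_bar = convolve(v, avg)
        # Jacobi update derived from the Euler-Lagrange equations.
        num = Ix * u_bar + Iy * v_bar + It
        den = alpha**2 + Ix**2 + Iy**2
        u = u_bar - Ix * num / den
        v = v_bar - Iy * num / den
    return u, v
```

The multi-scale variant runs this solver on each level of the image pyramid, upsampling and rescaling the flow from the coarser level as the initialization of the next.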

Relevance:

30.00%

Publisher:

Abstract:

[EN] The aim of this work is to propose a model for computing the optical flow in a sequence of images. We introduce a new temporal regularizer that is suitable for large displacements. We propose to decouple the spatial and temporal regularizations to avoid an incongruous formulation. For the spatial regularization we use the Nagel-Enkelmann operator, together with a newly designed temporal regularization. Our model is based on an energy functional that yields a partial differential equation (PDE). This PDE is embedded into a multipyramidal strategy to recover large displacements. A gradient descent technique is applied at each scale to reach the minimum.
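In generic form, and only as a hedged sketch of the standard variational structure rather than the authors' exact functional, such models minimize an energy that combines a data term with decoupled spatial and temporal regularizers:

```latex
% u(x,t): flow field; I: image sequence; \alpha_s, \alpha_t: weights.
% \Phi_s stands for the spatial operator (here, the Nagel-Enkelmann
% operator); \Phi_t is the separate temporal term. Keeping the two
% integrals independent is the decoupling discussed above.
\begin{equation}
  E(u) = \int \big( I(x + u(x,t),\, t+1) - I(x,t) \big)^2 \, dx\, dt
       + \alpha_s \int \Phi_s(\nabla u) \, dx\, dt
       + \alpha_t \int \Phi_t(\partial_t u) \, dx\, dt .
\end{equation}
```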

Relevance:

30.00%

Publisher:

Abstract:

[EN] In this paper we present a new model for optical flow calculation using a variational formulation which preserves discontinuities of the flow much better than classical methods. We study the Euler-Lagrange equations associated to the variational problem. In the case of quadratic energy, we show the existence and uniqueness of solutions of the corresponding evolution problem. Since our method avoids linearization in the optical flow constraint, it can recover large displacements in the scene. We avoid convergence to irrelevant local minima by embedding our method into a linear scale-space framework and using a focusing strategy from coarse to fine scales.

Relevance:

30.00%

Publisher:

Abstract:

The continuous increase in genome sequencing projects has produced a huge amount of data in the last 10 years: currently more than 600 prokaryotic and 80 eukaryotic genomes are fully sequenced and publicly available. However, the sequencing process alone determines only raw nucleotide sequences. This is just the first step of the genome annotation process, which deals with assigning biological information to each sequence. Annotation is done at each level of the biological information processing mechanism, from DNA to protein, and cannot be accomplished only by in vitro analysis procedures, which are extremely expensive and time-consuming when applied at such a large scale. Thus, in silico methods are needed to accomplish the task. The aim of this work was the implementation of predictive computational methods to allow fast, reliable, and automated annotation of genomes and proteins starting from amino acid sequences. The first part of the work focused on the implementation of a new machine-learning-based method for the prediction of the subcellular localization of soluble eukaryotic proteins. The method, called BaCelLo, was developed in 2006. Its main peculiarity is independence from biases present in the training dataset, which in all other available predictors developed so far cause over-prediction of the most represented examples. This result was achieved through a modification I made to the standard Support Vector Machine (SVM) algorithm, creating the so-called Balanced SVM. BaCelLo predicts the most important subcellular localizations in eukaryotic cells, and three kingdom-specific predictors were implemented. In two extensive comparisons, carried out in 2006 and 2008, BaCelLo outperformed all the available state-of-the-art methods for this prediction task. BaCelLo was subsequently used to completely annotate 5 eukaryotic genomes, by integrating it in a pipeline of predictors developed at the Bologna Biocomputing group by Dr. Pier Luigi Martelli and Dr. Piero Fariselli. An online database, called eSLDB, was developed by integrating, for each amino acid sequence extracted from the genomes, the predicted subcellular localization merged with experimental and similarity-based annotations. In the second part of the work a new machine-learning-based method was implemented for the prediction of GPI-anchored proteins. The method efficiently predicts from the raw amino acid sequence both the presence of the GPI anchor (by means of an SVM) and the position in the sequence of the post-translational modification event, the so-called ω-site (by means of a Hidden Markov Model (HMM)). The method, called GPIPE, greatly improved prediction performance for GPI-anchored proteins over all previously developed methods. GPIPE predicted up to 88% of the experimentally annotated GPI-anchored proteins while keeping the rate of false positive predictions as low as 0.1%. GPIPE was used to completely annotate 81 eukaryotic genomes, and more than 15000 putative GPI-anchored proteins were predicted, 561 of which are found in H. sapiens. On average, 1% of a proteome is predicted as GPI-anchored. A statistical analysis of the composition of the regions surrounding the ω-site allowed the definition of specific amino acid abundances in the different regions considered. Furthermore, the hypothesis proposed in the literature that compositional biases exist among the four major eukaryotic kingdoms was tested and rejected. All the developed predictors and databases are freely available at: BaCelLo http://gpcr.biocomp.unibo.it/bacello eSLDB http://gpcr.biocomp.unibo.it/esldb GPIPE http://gpcr.biocomp.unibo.it/gpipe
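The Balanced SVM modifies the SVM training algorithm itself; as a rough analogue of the underlying idea, counteracting over-represented classes in the training set, off-the-shelf libraries expose per-class penalty weighting. A minimal Python sketch follows; the toy dataset is an assumption and is not BaCelLo's sequence-derived input.

```python
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC
from sklearn.metrics import classification_report

# Toy, deliberately imbalanced dataset standing in for localization
# classes; BaCelLo's real inputs are features derived from sequences.
X, y = make_classification(n_samples=1000, n_features=20,
                           weights=[0.9, 0.1], random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

# class_weight='balanced' rescales the misclassification penalty C
# inversely to class frequency, so the majority class cannot dominate.
clf = SVC(kernel="rbf", class_weight="balanced").fit(X_tr, y_tr)
print(classification_report(y_te, clf.predict(X_te)))
```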

Relevance:

30.00%

Publisher:

Abstract:

The Peer-to-Peer network paradigm is drawing the attention of both end users and researchers for its features. P2P networks shift from the classic client-server approach to a high level of decentralization where there is no central control and all the nodes should be able not only to request services, but to provide them to other peers as well. While on one hand such a high level of decentralization might lead to interesting properties like scalability and fault tolerance, on the other hand it implies many new problems to deal with. A key feature of many P2P systems is openness, meaning that everybody is potentially able to join a network with no need for subscription or payment systems. The combination of openness and lack of central control makes it feasible for a user to free-ride, that is, to increase their own benefit by using services without allocating resources to satisfy other peers' requests. One of the main goals when designing a P2P system is therefore to achieve cooperation between users. Given that P2P systems are based on simple local interactions of many peers having partial knowledge of the whole system, an interesting way to achieve desired properties on a system scale is to obtain them as emergent properties of the many interactions occurring at the local node level. Two methods are typically used to address the problem of cooperation in P2P networks: 1) engineering emergent properties when designing the protocol; 2) studying the system as a game and applying Game Theory techniques, especially to find Nash equilibria in the game and to reach them, making the system stable against possible deviant behaviors. In this work we present an evolutionary framework to enforce cooperative behaviour in P2P networks that is an alternative to both methods mentioned above. Our approach is based on an evolutionary algorithm inspired by computational sociology and evolutionary game theory, in which each peer periodically tries to copy another peer that is performing better. The proposed algorithms, called SLAC and SLACER, draw inspiration from tag systems originating in computational sociology; the main idea behind them is to have low-performance nodes copy high-performance ones. The algorithm is run locally by every node and leads to an evolution of the network both from the topology and from the nodes' strategy point of view. Initial tests with a simple Prisoner's Dilemma application show how SLAC is able to bring the network to a state of high cooperation independently of the initial network conditions. Interesting results are obtained when studying the effect of cheating nodes on the SLAC algorithm: in some cases, selfish nodes rationally exploiting the system for their own benefit can actually improve system performance from the point of view of cooperation formation. The final step is to apply our results to more realistic scenarios. We put our efforts into studying and improving the BitTorrent protocol. BitTorrent was chosen not only for its popularity but because it has many points in common with the SLAC and SLACER algorithms, ranging from the game-theoretical inspiration (a tit-for-tat-like mechanism) to the swarm topology.
We found fairness, understood as the ratio between uploaded and downloaded data, to be a weakness of the original BitTorrent protocol, and we drew on the cooperation formation and maintenance mechanisms derived from the development and analysis of SLAC and SLACER to improve fairness and tackle free-riding and cheating in BitTorrent. We produced an extension of BitTorrent called BitFair that has been evaluated through simulation and has shown the ability to enforce fairness and to tackle free-riding and cheating nodes.
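The core evolutionary rule is simple enough to sketch. The following toy Python simulation is an assumption-laden illustration, not the SLAC/SLACER specification: the payoff values, degree, and mutation rates are invented, and whether high cooperation emerges depends on these parameters. Each node periodically compares itself against a random node and, if the other performs better, copies its strategy and its neighbourhood (the "copy and rewire" move), with occasional mutation.

```python
import random

N, ROUNDS, DEGREE = 120, 60, 6
MUT = 0.02  # mutation probability (illustrative)

# 1 = cooperate, 0 = defect; Prisoner's Dilemma payoff for the row player.
PAYOFF = {(1, 1): 3, (1, 0): 0, (0, 1): 5, (0, 0): 1}

strat = [random.randint(0, 1) for _ in range(N)]
links = [set(random.sample([j for j in range(N) if j != i], DEGREE))
         for i in range(N)]

def utility(i):
    # Average payoff over one-shot games with the current neighbours.
    if not links[i]:
        return 0.0
    return sum(PAYOFF[(strat[i], strat[j])] for j in links[i]) / len(links[i])

for _ in range(ROUNDS):
    for _ in range(N):
        i, j = random.randrange(N), random.randrange(N)
        if i == j:
            continue
        # SLAC-style rule: a worse-performing node copies a better one,
        # taking over its strategy AND its neighbourhood (rewiring).
        if utility(j) > utility(i):
            strat[i] = strat[j]
            links[i] = set(links[j]) | {j}
            links[i].discard(i)
        # Occasional mutation of the strategy ...
        if random.random() < MUT:
            strat[i] = random.randint(0, 1)
        # ... and of the neighbourhood (replaced by one random link).
        if random.random() < MUT:
            links[i] = {random.randrange(N)} - {i}

print("cooperation level:", sum(strat) / N)
```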

Relevance:

30.00%

Publisher:

Abstract:

The vast majority of known proteins have not yet been experimentally characterized, and little is known about their function. The design and implementation of computational tools can provide insight into the function of proteins based on their sequence, their structure, their evolutionary history, and their association with other proteins. Knowledge of the three-dimensional (3D) structure of a protein can lead to a deep understanding of its mode of action and interaction, but currently the structures of <1% of sequences have been experimentally solved. For this reason, it has become urgent to develop new methods able to computationally extract relevant information from protein sequence and structure. The starting point of my work was the study of the properties of contacts between protein residues, since they constrain protein folding and characterize different protein structures. Prediction of residue contacts in proteins is an interesting problem whose solution may be useful in protein fold recognition and de novo design. The prediction of these contacts requires the study of the inter-residue distances, related to the specific type of amino acid pair, that are encoded in the so-called contact map. An interesting new way of analyzing those structures emerged when network studies were introduced, with pivotal papers demonstrating that protein contact networks also exhibit small-world behavior. In order to highlight constraints for the prediction of protein contact maps, and for applications in the field of protein structure prediction and/or reconstruction from experimentally determined contact maps, I studied to what extent the characteristic path length and clustering coefficient of the protein contact network reveal characteristic features of protein contact maps. Provided that residue contacts are known for a protein sequence, the major features of its 3D structure can be deduced by combining this knowledge with correctly predicted motifs of secondary structure. In the second part of my work I focused on a particular protein structural motif, the coiled-coil, known to mediate a variety of fundamental biological interactions. Coiled-coils are found in a variety of structural forms and in a wide range of proteins including, for example, small units such as leucine zippers that drive the dimerization of many transcription factors, or more complex structures such as the family of viral proteins responsible for virus-host membrane fusion. The coiled-coil structural motif is estimated to account for 5-10% of the protein sequences in the various genomes. Given their biological importance, I introduced a Hidden Markov Model (HMM) that exploits the evolutionary information derived from multiple sequence alignments to predict coiled-coil regions and to discriminate coiled-coil sequences. The results indicate that the new HMM outperforms all existing programs and can be adopted for coiled-coil prediction and large-scale genome annotation. Genome annotation is a key issue in modern computational biology, being the starting point towards the understanding of the complex processes involved in biological networks. The rapid growth in the number of available protein sequences and structures poses new fundamental problems that still deserve an interpretation. Nevertheless, these data are at the basis of the design of new strategies for tackling problems such as the prediction of protein structure and function.
Experimental determination of the functions of all these proteins would be a hugely time-consuming and costly task and, in most instances, has not been carried out. For example, currently only approximately 20% of annotated proteins in the Homo sapiens genome have been experimentally characterized. A commonly adopted procedure for annotating protein sequences relies on "inheritance through homology", based on the notion that similar sequences share similar functions and structures. This procedure consists in assigning sequences to a specific group of functionally related sequences, grouped through clustering techniques. The clustering procedure is based on suitable similarity rules, since predicting protein structure and function from sequence largely depends on the value of sequence identity. However, additional levels of complexity are due to multi-domain proteins, to proteins that share common domains but do not necessarily share the same function, and to the finding that different combinations of shared domains can lead to different biological roles. In the last part of this study I developed and validated a system that contributes to sequence annotation by taking advantage of a validated transfer-through-inheritance procedure for molecular functions and structural templates. After a cross-genome comparison with the BLAST program, clusters were built on the basis of two stringent constraints on sequence identity and coverage of the alignment. The adopted measure explicitly addresses the problem of annotating multi-domain proteins and allows a fine-grained division of the whole set of proteomes used, ensuring cluster homogeneity in terms of sequence length. A high coverage of structure templates over the length of protein sequences within clusters ensures that multi-domain proteins, when present, can be templates for sequences of similar length. This annotation procedure includes the possibility of reliably transferring statistically validated functions and structures to sequences, considering the information available in present databases of molecular functions and structures.
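The two network observables discussed above are straightforward to compute once a contact map is built from 3D coordinates. A minimal Python sketch follows; the 8 Å Cα distance cutoff is a common convention assumed here for illustration, and the random-walk coordinates merely stand in for a real backbone.

```python
import numpy as np
import networkx as nx

def contact_network(ca_coords, cutoff=8.0):
    """Build a residue contact graph from C-alpha coordinates (n x 3).
    Residues closer than `cutoff` (in Angstroms) are linked."""
    coords = np.asarray(ca_coords, dtype=float)
    diff = coords[:, None, :] - coords[None, :, :]
    dist = np.sqrt((diff ** 2).sum(-1))
    contact_map = (dist < cutoff) & ~np.eye(len(coords), dtype=bool)
    return nx.from_numpy_array(contact_map.astype(int))

# Toy coordinates standing in for a small protein backbone.
rng = np.random.default_rng(1)
coords = np.cumsum(rng.normal(scale=2.5, size=(60, 3)), axis=0)

G = contact_network(coords)
# The two small-world observables studied above.
if nx.is_connected(G):
    print("characteristic path length:",
          nx.average_shortest_path_length(G))
print("clustering coefficient:", nx.average_clustering(G))
```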

Relevance:

30.00%

Publisher:

Abstract:

Colloidal nanoparticles are additives used to improve or modify several properties of thermoplastic or elastic polymers. Usually colloid-polymer mixtures show phase separation due to the depletion effect. The strategy to overcome this depletion demixing was to prepare surface-modified colloidal particles which can be blended homogeneously with linear polymer chains. A successful synthesis strategy for the preparation of hairy nanospheres was developed by grafting polystyrene macromonomer chains onto polyorganosiloxane microgels. The number of hairs per particle, for a core radius of approximately 10 nm, exceeded 150 in all cases. The molecular weight of the hairs varied between 4000 and 18000 g/mol. The compatibility of these hairy spheres mixed with linear polymer chains was investigated by AFM, TEM and SAXS. Homogeneous mixtures were found if the molecular weight of the polymer hairs on the particle surface is at least as large as the molecular weight of the matrix chains. If the chains are much shorter than the hairs, the colloidal hair corona is strongly swollen by the matrix polymer, leading to a long-range soft interparticle repulsion ('wet brush'). If hairs and chains are comparable in length, the corona shows much less volume swelling, leading to a short-range repulsive potential similar to hard-sphere systems ('dry brush'). Polymer chains and colloidal particles demix due to depletion interactions. This entropically driven demixing could be avoided, up to high concentrations, by grafting polymer hairs of various molecular weights onto the colloid surfaces. A new synthesis strategy for spherical brushes and hairy tracer particles was devised and successfully implemented. The compatibility behavior of these spherical brushes in a melt of linear polymer chains as matrix was investigated by electron microscopy and small-angle X-ray scattering. The mixtures consisted of spherical brushes and matrix chains of different molecular weights. Miscibility was found to be decisively influenced by the ratio of hair length to matrix chain length. Investigations of the relaxation behavior by rheology and SAXS show that the concept of 'dry brush' and 'wet brush' systems can be transferred to these mixtures. As the experiments showed, volume swelling of the hair corona by the matrix chains is already observable for polymers with relatively low molecular weights, and it is the more pronounced the larger the length ratio between polymer hairs and matrix chains. Swelling increases the effective radius of the particles and thus corresponds to an increase of the effective volume fraction. This leads to the formation of a higher degree of order and to a freezing of the relaxation of this structural order.

Relevance:

30.00%

Publisher:

Abstract:

In a large number of problems, the high dimensionality of the search space, the vast number of variables, and economic constraints limit the ability of classical techniques to reach the optimum of a function, known or unknown. In this thesis we investigate the possibility of combining approaches from advanced statistics and optimization algorithms in such a way as to better explore the combinatorial search space and to increase the performance of the approaches. To this purpose we propose two methods: (i) Model Based Ant Colony Design and (ii) Naïve Bayes Ant Colony Optimization. We test the performance of the two proposed solutions in a simulation study, and we apply the novel techniques to an application in the field of Enzyme Engineering and Design.
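As a hedged illustration of the general idea, here is a generic ant colony scheme for a toy binary problem; it is not the thesis' Model Based Ant Colony Design or Naïve Bayes ACO, and the pheromone update rule, parameters, and objective are illustrative assumptions.

```python
import random

# Generic ACO on a toy binary problem: maximize the number of ones.
# A pheromone value per bit biases the ants' sampling distribution;
# good solutions deposit pheromone, evaporation forgets old choices.
N_BITS, N_ANTS, N_ITER = 30, 20, 100
RHO = 0.1  # evaporation rate

def fitness(bits):
    return sum(bits)  # stand-in for an expensive objective

pheromone = [0.5] * N_BITS  # P(bit = 1) for each position

for _ in range(N_ITER):
    # Each ant samples a candidate solution from the pheromone model.
    ants = [[1 if random.random() < p else 0 for p in pheromone]
            for _ in range(N_ANTS)]
    best = max(ants, key=fitness)
    # Evaporate, then reinforce the best ant's choices.
    for k in range(N_BITS):
        pheromone[k] = (1 - RHO) * pheromone[k] + RHO * best[k]

print("best fitness:", fitness(best), "of", N_BITS)
```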

Relevance:

30.00%

Publisher:

Abstract:

A series of new columnar discotic liquid crystalline materials based on the superphenalene (C96) core has been synthesized by oxidative cyclodehydrogenation with iron(III) chloride of suitable three-dimensional oligophenylene precursors. These compounds were investigated by means of differential scanning calorimetry (DSC), polarized optical microscopy (POM) and wide angle X-ray scattering (WAXS), and showed highly ordered supramolecular arrays and mesophase behavior over a broad temperature range. Good solubility, through the introduction of long alkyl chains, and the fact that these new superphenalene derivatives were found to be liquid crystalline at room temperature enabled the formation of highly ordered films (using the zone-casting technique), a requirement for application in organic electronic devices. The one-dimensional, intracolumnar charge carrier mobilities of superphenalene derivatives were determined using the pulse-radiolysis time-resolved microwave conductivity technique (PR-TRMC). Electrical properties of different C96-C12 architectures on mica surfaces were examined by using Electrostatic Force Microscopy (EFM) and Kelvin Probe Force Microscopy (KPFM). Hexa-peri-hexabenzocoronene (C42) derivatives substituted at the periphery with six branched alkyl ether chains were also synthesized. It was found that the introduction of ether groups within the side chains enhances the affinity of the discotic molecules towards polar surfaces, resulting in homeotropic self-assembly (as shown by POM and 2D-WAXS) when the compounds are processed from the isotropic state between two surfaces. A new, insoluble, superphenalene building block bearing six reactive sites was prepared, and was further used for the preparation of dendronized superphenalenes with bulky dendritic substituents around the core. UV/Vis and fluorescence experiments suggest reduced π-π stacking of the superphenalene cores as a result of steric hindrance between the peripheral dendritic units. A new family of graphitic molecules with partial "zig-zag" periphery has been established. The incorporation of "zig-zag" edges was shown to have a strong influence on the electronic properties of the new molecules (as studied by solution and solid-state UV/Vis, and fluorescence spectroscopy), leading to a significant bathochromic shift with respect to the parent PAHs (C42 and C96). The reactivity of the additional double bonds was examined. The attachment of long alkyl chains to a "zig-zag" superphenalene core afforded a new, processable, liquid crystalline material.