909 results for supramolecular architectures
Abstract:
This minireview is meant as an introduction to the following paper. To this end, it presents the general background against which the joint paper should be understood. The first objective of the present paper is thus to clarify some concepts and related terminology, drawing a clear distinction between i) atomic diversity (i.e., atomic-property space), ii) molecular or macromolecular diversity (i.e., molecular- or macromolecular-property spaces), and iii) chemical diversity (i.e., chemical-diversity space). The first refers to the various electronic states an atom can occupy. The second encompasses the conformational and property spaces of a given (macro)molecule. The third pertains to the diversity in structure and properties exhibited by a library or a supramolecular assembly of different chemical compounds. The ground is thus laid for the content of the joint paper, which pertains to case ii, to be placed in its broader chemodiversity context. The second objective of this paper is to point to the concepts of chemodiversity and biodiversity as forming a continuum. Chemodiversity is indeed the material substratum of organisms. In other words, chemodiversity is the material condition for life to emerge and exist. Increasing our knowledge of chemodiversity is thus a condition for a better understanding of life as a process.
Abstract:
It has been repeatedly debated which strategies people rely on in inference. These debates have been difficult to resolve, partially because hypotheses about the decision processes assumed by these strategies have typically been formulated qualitatively, making it hard to test precise quantitative predictions about response times and other behavioral data. One way to increase the precision of strategies is to implement them in cognitive architectures such as ACT-R. Often, however, a given strategy can be implemented in several ways, with each implementation yielding different behavioral predictions. We present an experimental paradigm, and report a study using it, that can help to identify the correct implementations of classic compensatory and non-compensatory strategies such as the take-the-best and tallying heuristics and the weighted-linear model.
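To make the contrast between the two strategy classes concrete, the Python sketch below implements toy versions of take-the-best (non-compensatory) and tallying (compensatory) for a two-alternative choice. This is an illustrative sketch only; the cue names, validities and cue values are invented and are not taken from the paper or from an ACT-R implementation.

```python
# Hedged sketch: minimal take-the-best vs. tallying for choosing between
# two options described by binary cues. All cue names and values are made up.

cues = ["cue_a", "cue_b", "cue_c"]
validities = {"cue_a": 0.9, "cue_b": 0.7, "cue_c": 0.6}   # hypothetical validities

def take_the_best(option_x, option_y):
    """Inspect cues in order of validity; decide on the first cue that
    discriminates and ignore all remaining cues (non-compensatory)."""
    for cue in sorted(cues, key=lambda c: validities[c], reverse=True):
        if option_x[cue] != option_y[cue]:
            return "x" if option_x[cue] > option_y[cue] else "y"
    return "guess"

def tallying(option_x, option_y):
    """Count positive cues for each option with equal weights (compensatory)."""
    score_x = sum(option_x[c] for c in cues)
    score_y = sum(option_y[c] for c in cues)
    if score_x == score_y:
        return "guess"
    return "x" if score_x > score_y else "y"

x = {"cue_a": 1, "cue_b": 0, "cue_c": 0}
y = {"cue_a": 0, "cue_b": 1, "cue_c": 1}
print(take_the_best(x, y))   # "x": the most valid cue already discriminates
print(tallying(x, y))        # "y": the two weaker cues outweigh the single one
```

The same example shows why the two strategies can yield different choices, and hence different behavioral predictions, for identical cue patterns.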
Abstract:
Background: The arrangement of regulatory motifs in gene promoters, or promoter architecture, is the result of mutation and selection processes that have operated over many millions of years. In mammals, tissue-specific transcriptional regulation is related to the presence of specific protein-interacting DNA motifs in gene promoters. However, little is known about the relative location and spacing of these motifs. To fill this gap, we have performed a systematic search for motifs that show significant bias at specific promoter locations in a large collection of housekeeping and tissue-specific genes.
Results: We observe that promoters driving housekeeping gene expression are enriched in particular motifs with strong positional bias, such as YY1, which are of little relevance in promoters driving tissue-specific expression. We also identify a large number of motifs that show positional bias in genes expressed in a highly tissue-specific manner. They include well-known tissue-specific motifs, such as HNF1 and HNF4 motifs in liver, kidney and small intestine, or RFX motifs in testis, as well as many potentially novel regulatory motifs. Based on this analysis, we provide predictions for 559 tissue-specific motifs in mouse gene promoters.
Conclusion: The study shows that motif positional bias is an important feature of mammalian proximal promoters and that it affects both general and tissue-specific motifs. Motif positional constraints define very distinct promoter architectures depending on breadth of expression and type of tissue.
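As one illustration of what a positional-bias test can look like (the abstract does not specify the paper's actual pipeline), the sketch below compares hypothetical motif match positions relative to the transcription start site against a uniform background using a Kolmogorov-Smirnov test; the motif hits, window size and threshold are invented.

```python
# Hedged illustration of a positional-bias test for a promoter motif.
# The positions below are invented; a real analysis would use motif hits
# mapped relative to the transcription start site (TSS) of each promoter.
from scipy.stats import kstest

promoter_length = 500                     # bp upstream of the TSS considered
# Hypothetical positions (bp upstream of the TSS) of YY1-like motif hits.
hit_positions = [35, 42, 48, 51, 60, 62, 70, 75, 80, 95, 110, 320, 410]

# Under "no positional preference", hits would be uniform over the window.
stat, p_value = kstest([p / promoter_length for p in hit_positions], "uniform")
print(f"KS statistic = {stat:.2f}, p = {p_value:.3g}")
# A small p-value suggests the motif prefers specific promoter locations.
```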
Abstract:
Monodispersed colloidal crystals based on silica sub-micrometric particles were synthesized using the Stöber-Fink-Bohn process. The control of nucleation and coalescence results in improved characteristics such as high sphericity and very low size dispersion. The resulting silica particles show characteristics suitable for self-assembly into closely-packed 2D crystal monolayers over large areas by an accurate Langmuir-Blodgett deposition process on glass, fused silica and silicon substrates. Due to their special optical properties, colloidal films have potential applications in fields including photonics, electronics, electro-optics, medicine (detectors and sensors), membrane filters and surface devices. The deposited monolayers of silica particles were characterized by means of FESEM, AFM and optical transmittance measurements in order to analyze their specific properties and characteristics. We propose a theoretical calculation for the photonic band gaps in 2D systems using an extrapolation of the photonic behavior of the crystal from 3D to 2D. In this work we show that the methodology used and the conditions in self-assembly processes are decisive for producing high-quality two-dimensional colloidal crystals by the Langmuir-Blodgett technique.
Abstract:
The book presents the state of the art in machine learning algorithms (artificial neural networks of different architectures, support vector machines, etc.) as applied to the classification and mapping of spatially distributed environmental data. Basic geostatistical algorithms are presented as well. New trends in machine learning and their application to spatial data are given, and real case studies based on environmental and pollution data are carried out. The book provides a CD-ROM with the Machine Learning Office software, including sample sets of data, that will allow both students and researchers to put the concepts rapidly to practice.
Abstract:
quantiNemo is an individual-based, genetically explicit stochastic simulation program. It was developed to investigate the effects of selection, mutation, recombination and drift on quantitative traits with varying architectures in structured populations connected by migration and located in a heterogeneous habitat. quantiNemo is highly flexible at various levels: population, selection, trait(s) architecture, genetic map for QTL and/or markers, environment, demography, mating system, etc. quantiNemo is coded in C++ using an object-oriented approach and runs on any computer platform. Availability: Executables for several platforms, user's manual, and source code are freely available under the GNU General Public License at http://www2.unil.ch/popgen/softwares/quantinemo.
Abstract:
We uncover the global organization of clustering in real complex networks. To this end, we ask whether triangles in real networks organize as in maximally random graphs with given degree and clustering distributions, or as in maximally ordered graph models where triangles are forced into modules. The answer comes by way of exploring m-core landscapes, where the m-core is defined, akin to the k-core, as the maximal subgraph with edges participating in at least m triangles. This property defines a set of nested subgraphs that, contrary to k-cores, is able to distinguish between hierarchical and modular architectures. We find that the clustering organization in real networks is neither completely random nor ordered although, surprisingly, it is more random than modular. This supports the idea that the structure of real networks may in fact be the outcome of self-organized processes based on local optimization rules, in contrast to global optimization principles.
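A minimal sketch of the m-core definition quoted above, written with networkx: edges whose endpoints share fewer than m common neighbours are removed repeatedly until a fixed point is reached. This is only one straightforward way to realize the definition and is not taken from the paper's own code.

```python
import networkx as nx

def m_core(G, m):
    """Return the m-core of G: the maximal subgraph whose every edge
    participates in at least m triangles (within the subgraph itself)."""
    H = G.copy()
    changed = True
    while changed:
        changed = False
        # An edge (u, v) lies in as many triangles as u and v have common neighbours.
        to_drop = [(u, v) for u, v in H.edges()
                   if len(list(nx.common_neighbors(H, u, v))) < m]
        if to_drop:
            H.remove_edges_from(to_drop)
            changed = True
    # Drop isolated nodes left behind by the edge removals.
    H.remove_nodes_from(list(nx.isolates(H)))
    return H

G = nx.karate_club_graph()
print(m_core(G, 2).number_of_edges())   # edges surviving in the 2-core landscape
```

Sweeping m from 0 upward yields the nested subgraphs that form the m-core landscape described in the abstract.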
Abstract:
This work focuses on the prediction of the two main nitrogenous variables that describe the water quality at the effluent of a Wastewater Treatment Plant. We have developed two kinds of neural network architectures, based either on considering only one output or, on the other hand, the usual five effluent variables that define water quality: suspended solids, biochemical organic matter, chemical organic matter, total nitrogen and total Kjeldahl nitrogen. Two learning techniques, based on a classical adaptive gradient and on a Kalman filter, have been implemented. In order to improve generalization and performance, we have selected variables by means of genetic algorithms and fuzzy systems. The training, testing and validation sets show that the final networks are able to learn the available simulated data well enough, especially for total nitrogen.
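A minimal sketch of the two architectures contrasted above: one network dedicated to a single effluent variable versus one network predicting the five effluent variables jointly. The data, network sizes and variable ordering are placeholders, and scikit-learn's default solver stands in for the paper's adaptive-gradient and Kalman-filter training.

```python
# Hedged sketch of the two architectures compared in the abstract:
# one network per output variable vs. one network with five outputs.
# Data are random placeholders; the original training schemes
# (adaptive gradient, Kalman filter) are not reproduced here.
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 8))            # influent/process measurements (placeholder)
Y = rng.normal(size=(200, 5))            # SS, BOD, COD, total N, Kjeldahl N (placeholder)

# Architecture 1: a dedicated network for a single effluent variable.
single_output = MLPRegressor(hidden_layer_sizes=(10,), max_iter=2000)
single_output.fit(X, Y[:, 3])            # e.g., total nitrogen only

# Architecture 2: one network jointly predicting the five effluent variables.
multi_output = MLPRegressor(hidden_layer_sizes=(10,), max_iter=2000)
multi_output.fit(X, Y)

print(single_output.predict(X[:1]), multi_output.predict(X[:1]))
```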
Abstract:
Spatial data analysis, mapping and visualization are of great importance in various fields: environment, pollution, natural hazards and risks, epidemiology, spatial econometrics, etc. A basic task of spatial mapping is to make predictions based on some empirical data (measurements). A number of state-of-the-art methods can be used for the task: deterministic interpolations, methods of geostatistics such as the family of kriging estimators (Deutsch and Journel, 1997), machine learning algorithms such as artificial neural networks (ANN) of different architectures, hybrid ANN-geostatistics models (Kanevski and Maignan, 2004; Kanevski et al., 1996), etc. All the methods mentioned above can be used for solving the problem of spatial data mapping. Environmental empirical data are always contaminated/corrupted by noise, often of unknown nature. That is one of the reasons why deterministic models can be inconsistent, since they treat the measurements as values of some unknown function that should be interpolated. Kriging estimators treat the measurements as the realization of some spatial random process. To obtain an estimation with kriging one has to model the spatial structure of the data: the spatial correlation function or (semi-)variogram. This task can be complicated if the number of measurements is insufficient, and the variogram is sensitive to outliers and extremes. ANNs are a powerful tool, but they also suffer from a number of drawbacks. ANNs of a special type, multilayer perceptrons, are often used as a detrending tool in hybrid (ANN + geostatistics) models (Kanevski and Maignan, 2004). Therefore, the development and adaptation of a method that is nonlinear and robust to noise in measurements, that can deal with small empirical datasets, and that has a solid mathematical background is of great importance. The present paper deals with such a model, based on Statistical Learning Theory (SLT): Support Vector Regression. SLT is a general mathematical framework devoted to the problem of estimating dependencies from empirical data (Hastie et al., 2004; Vapnik, 1998). SLT models for classification, Support Vector Machines, have shown good results on different machine learning tasks. The results of SVM classification of spatial data are also promising (Kanevski et al., 2002). The properties of SVM for regression, Support Vector Regression (SVR), are less studied. First results of the application of SVR for spatial mapping of physical quantities were obtained by the authors for mapping of medium porosity (Kanevski et al., 1999) and for mapping of radioactively contaminated territories (Kanevski and Canu, 2000). The present paper is devoted to further understanding of the properties of the SVR model for spatial data analysis and mapping. A detailed description of the SVR theory can be found in (Cristianini and Shawe-Taylor, 2000; Smola, 1996), and the basic equations for nonlinear modeling are given in Section 2. Section 3 discusses the application of SVR to spatial data mapping on a real case study: soil pollution by the Cs137 radionuclide. Section 4 discusses the properties of the model applied to noisy data or data with outliers.
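A minimal sketch of the workflow described above, using epsilon-SVR as a spatial interpolator on scattered 2D measurements. The coordinates, values, kernel and hyperparameters are synthetic and illustrative only; they are not those of the Cs137 case study.

```python
# Hedged sketch: epsilon-SVR used as a spatial interpolator on scattered
# 2-D measurements. Coordinates, values and hyperparameters are synthetic
# and only illustrate the workflow, not the paper's Cs137 case study.
import numpy as np
from sklearn.svm import SVR
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import make_pipeline

rng = np.random.default_rng(1)
coords = rng.uniform(0, 100, size=(300, 2))                       # sample locations (x, y)
values = np.sin(coords[:, 0] / 15) + 0.1 * rng.normal(size=300)   # noisy measured field

# RBF-kernel SVR: C limits model complexity, epsilon sets the noise tolerance.
model = make_pipeline(StandardScaler(), SVR(kernel="rbf", C=10.0, epsilon=0.05))
model.fit(coords, values)

# Predict on a regular grid to produce the map.
gx, gy = np.meshgrid(np.linspace(0, 100, 50), np.linspace(0, 100, 50))
grid = np.column_stack([gx.ravel(), gy.ravel()])
prediction = model.predict(grid).reshape(gx.shape)
print(prediction.shape)
```

The epsilon-insensitive loss is what gives SVR its robustness to measurement noise, which is the property the abstract emphasizes for environmental data.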
Abstract:
Understanding the molecular underpinnings of evolutionary adaptations is a central focus of modern evolutionary biology. Recent studies have uncovered a panoply of complex phenotypes, including locally adapted ecotypes and cryptic morphs, divergent social behaviours in birds and insects, as well as alternative metabolic pathways in plants and fungi, that are regulated by clusters of tightly linked loci. These 'supergenes' segregate as stable polymorphisms within or between natural populations and influence ecologically relevant traits. Some supergenes may span entire chromosomes, because selection for reduced recombination between a supergene and a nearby locus providing additional benefits can lead to locus expansions with dynamics similar to those known for sex chromosomes. In addition to allowing for the co-segregation of adaptive variation within species, supergenes may facilitate the spread of complex phenotypes across species boundaries. Application of new genomic methods is likely to lead to the discovery of many additional supergenes in a broad range of organisms and reveal similar genetic architectures for convergently evolved phenotypes.
Abstract:
We describe a series of experiments in which we start with English-to-French and English-to-Japanese versions of an Open Source rule-based speech translation system for a medical domain, and bootstrap corresponding statistical systems. Comparative evaluation reveals that the rule-based systems are still significantly better than the statistical ones, despite the fact that considerable effort has been invested in tuning both the recognition and translation components; also, a hybrid system only marginally improved recall at the cost of a loss in precision. The result suggests that rule-based architectures may still be preferable to statistical ones for safety-critical speech translation tasks.
Abstract:
A particular property of the matched desired-impulse-response receiver is introduced in this paper, namely, the fact that full exploitation of the diversity is obtained with multiple beamformers when the channel is spatially and temporally dispersive. This particularity makes the receiver especially suitable for mobile and underwater communications. The new structure provides better performance than conventional and weighted VRAKE receivers, and a diversity gain with no need for additional radio frequency equipment. The baseband hardware needed for this new receiver may be obtained through reconfigurability of the RAKE architectures available at the base station. The proposed receiver is tested through simulations assuming the UTRA frequency-division-duplexing mode.
Abstract:
The application of adaptive antenna techniques to fixed-architecture base stations has been shown to offer wide-ranging benefits, including interference rejection capabilities and increased coverage and spectral efficiency. Unfortunately, the actual implementation of these techniques in mobile communication scenarios has traditionally been held back for two fundamental reasons. On one hand, the lack of flexibility of current transceiver architectures does not allow for the introduction of advanced add-on functionalities. On the other hand, the often oversimplified models for the spatiotemporal characteristics of the radio communications channel generally give rise to performance predictions that are, in practice, too optimistic. The advent of software radio architectures represents a big step toward the introduction of advanced receive/transmit capabilities. Thanks to their inherent flexibility and robustness, software radio architectures are the appropriate enabling technology for the implementation of array processing techniques. Moreover, given the exponential progression of communication standards in coexistence and their constant evolution, software reconfigurability will probably soon become the only cost-efficient alternative for the transceiver upgrade. This article analyzes the requirements for the introduction of software radio techniques and array processing architectures in multistandard scenarios. It basically summarizes the conclusions and results obtained within the ACTS project SUNBEAM, proposing algorithms and analyzing the feasibility of implementation of innovative and software-reconfigurable array processing architectures in multistandard settings.
Beyond EA Frameworks: Towards an Understanding of the Adoption of Enterprise Architecture Management
Abstract:
Enterprise architectures (EA) are considered promising approaches to reduce the complexities of growing information technology (IT) environments while keeping pace with an ever-changing business environment. However, the implementation of enterprise architecture management (EAM) has proven difficult in practice. Many EAM initiatives face severe challenges, as demonstrated by the low usage level of enterprise architecture documentation and enterprise architects' lack of authority regarding enforcing EAM standards and principles. These challenges motivate our research. Based on three field studies, we first analyze EAM implementation issues that arise when EAM is started as a dedicated and isolated initiative. Following a design-oriented paradigm, we then suggest a design theory for architecture-driven IT management (ADRIMA) that may guide organizations to successfully implement EAM. This theory summarizes prescriptive knowledge related to embedding EAM practices, artefacts and roles in the existing IT management processes and organization.
Abstract:
Web services form a central part of the Semantic Web. They provide modern and efficient tools for distributed computing and lay the foundation for service-oriented architectures. Networked, automated business requires continuous activity from all parties; in addition, the system supporting it must be flexible and offer versatile functionality. These goals can be achieved by composing web services. The composition process consists of a set of tasks such as service modelling, service composition, service execution and verification. In this work, a simple business process was implemented. Alternative standards and implementation techniques were examined for the implementation, and aspects related to optimizing execution were also taken into account.