80 results for Gradient-based approaches
Abstract:
Inferring population admixture from genetic data and quantifying it is a difficult but crucial task in evolutionary and conservation biology. Unfortunately, state-of-the-art probabilistic approaches are computationally demanding. Effectively exploiting the computational power of modern multiprocessor systems can thus have a positive impact on Monte Carlo-based simulation of admixture modeling. A novel parallel approach is briefly described and promising results from its message passing interface (MPI)-based C implementation are reported.
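The abstract names an MPI-based C implementation but gives no code; as a purely illustrative sketch of the underlying pattern (independent per-process Monte Carlo work combined by a reduction), here is a minimal mpi4py analogue in which the sample budget and the uniform-draw statistic are hypothetical stand-ins for the admixture model's computations:

```python
# Illustrative sketch only: the paper's implementation is in C with MPI.
from mpi4py import MPI
import random

comm = MPI.COMM_WORLD
rank = comm.Get_rank()
size = comm.Get_size()

N_SAMPLES = 100_000          # hypothetical per-process sample budget
random.seed(rank)            # decorrelate the per-process random streams

# Stand-in for the per-sample statistic of the admixture model
local_sum = sum(random.random() for _ in range(N_SAMPLES))

# Combine partial results on rank 0
total = comm.reduce(local_sum, op=MPI.SUM, root=0)
if rank == 0:
    print("Monte Carlo estimate:", total / (N_SAMPLES * size))
```

Run with e.g. `mpiexec -n 4 python mc_sketch.py`; this embarrassingly parallel decomposition plus a final reduction is what lets such simulations scale across multiprocessor systems.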
Abstract:
Attitudes to floristics have changed considerably during the past few decades as a result of increasing and often more focused consumer demands, heightened awareness of the threats to biodiversity, information flow and overload, and the application of electronic and web-based techniques to information handling and processing. This paper will examine these concerns in relation to our floristic knowledge and needs in the region of SW Asia. Particular reference will be made to the experience gained from the Euro+Med PlantBase project for the preparation of an electronic plant-information system for Europe and the Mediterranean, with a single core list of accepted plant names and synonyms, based on consensus taxonomy agreed by a specialist network. The many challenges (scientific, technical and organisational) that it has presented will be discussed, as well as the problems of handling non-taxonomic information from fields such as conservation, karyology, biosystematics and mapping. The question of regional cooperation and the sharing of efforts and resources will also be raised and attention drawn to the recent planning workshop held in Rabat (May 2002) for establishing a technical cooperation network for taxonomic capacity building in North Africa as a possible model for the SW Asia region.
Abstract:
An amorphous, catechol-based analogue of PEEK ("o-PEEK") has been prepared by a classical step-growth polymerization reaction between catechol and 4,4'-difluorobenzophenone and shown to be readily soluble in a range of organic solvents. Copolymers with p-PEEK have been investigated, including an amorphous 50:50 composition and a semicrystalline though still organic-soluble material comprising 70% p-PEEK. o-PEEK has also been obtained by entropy-driven ring-opening polymerization of the macrocyclic oligomers (MCOs) formed by cyclo-condensation of catechol with 4,4'-difluorobenzophenone under pseudo-high-dilution conditions. The principal products of this latter reaction were the cyclic dimer 3a (20 wt%), cyclic trimer 3b (16%), cyclic tetramer 3c (14%), cyclic pentamer 3d (13%) and cyclic hexamer 3e (12%). Macrocycles 3a-c were isolated as pure compounds by gradient column chromatography, and the structures of the cyclic dimer 3a and cyclic tetramer 3c were analyzed by single-crystal X-ray diffraction. A mixture of MCOs, 3, of similar composition was obtained by cyclodepolymerization of high-molar-mass o-PEEK in dilute solution.
Abstract:
This paper reviews four approaches used to create rational tools to aid the planning and management of the building design process, and then proposes a fifth approach. The new approach is based on the mechanical aspects of technology rather than subjective design issues. The knowledge base contains, for each construction technology, a generic model of the detailed design process. Each activity in the process is specified by its input and output information needs. By connecting the input demands of one technology with the output supply from another, a map or network of design activity is formed, as sketched below. Thus, it is possible to structure a specific model from the generic knowledge base within a knowledge-based engineering (KBE) system.
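As a minimal sketch of the mapping idea (all technology names and information items below are hypothetical), connecting input demands to output supplies amounts to building a directed graph over the technologies:

```python
# Hypothetical knowledge-base entries: each construction technology declares
# the information it demands (inputs) and the information it supplies (outputs).
technologies = {
    "foundation_design": {"inputs": {"soil_report"}, "outputs": {"foundation_layout"}},
    "frame_design": {"inputs": {"foundation_layout"}, "outputs": {"frame_drawings"}},
    "cladding_design": {"inputs": {"frame_drawings"}, "outputs": {"cladding_spec"}},
}

# Form the design-activity network: an edge a -> b whenever an output of a
# satisfies an input demand of b.
edges = [
    (a, b)
    for a, ta in technologies.items()
    for b, tb in technologies.items()
    if a != b and ta["outputs"] & tb["inputs"]
]
print(edges)  # [('foundation_design', 'frame_design'), ('frame_design', 'cladding_design')]
```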
Abstract:
Purpose – This paper seeks to examine the nature of "service innovation" in the facilities management (FM) context. It reviews recent thinking on "service innovation" as distinct from "product innovation". Applying these contemporary perspectives, it describes UK case studies of 11 innovations in different FM organisations. These include both in-house client-based innovations and third-party innovations. Design/methodology/approach – The study described in the paper encompasses 11 different innovations that constitute a mix of process, product and practice innovations. All of the innovations stem from UK-based organisations that were subject to in-depth interviews regarding the identification, screening, commitment of resources and implementation of the selected innovations. Findings – The research suggested that service innovation is highly active in the UK FM sector. However, the process of innovation rarely followed a common formalised path. Generally, the innovations were one-shot commitments at the early stage. None of the innovations studied failed to proceed to the full adoption stage. This was due either to the reluctance of participating organisations to volunteer "tested but unsuccessful" innovations or to the absence of any trial methods that might have exposed an innovation's shortcomings. Research limitations/implications – The selection of innovations was restricted to the UK context. Moreover, the choice of innovations was partly determined by the innovating organisation. This selection process appeared to emphasise "one-shot" high-profile technological innovations, typically associated with software. This may have been at the expense of less resource-intensive, bottom-up innovations. Practical implications – This paper suggests that there is a role for "research and innovation" teams within larger FM organisations, whether client-based or third-party. Central to this philosophy is an approach that is open to the possibility of failure. The innovations studied were risk-averse, with a firm commitment to proceed made at the early stage. Originality/value – This paper introduces new thinking on the subject of "service innovation" to the context of FM. It presents research and development as a planned solution to innovation. This approach will enable service organisations to fully test and exploit service innovations.
Abstract:
The aim of this review paper is to present the experimental methodologies and the mathematical approaches used to determine effective diffusivities of solutes in food materials. The paper commences by describing the diffusion phenomena related to solute mass transfer in foods and effective diffusivities. It then focuses on the mathematical formulation for the calculation of effective diffusivities, considering different diffusion models based on Fick's second law of diffusion. Finally, experimental considerations for effective diffusivity determination are elucidated, based primarily on the acquisition of a series of solute-content-versus-time curves appropriate to the chosen model equation. Different factors contributing to the determination of effective diffusivities, such as the structure of the food material, temperature, diffusion solvent, agitation, sampling, concentration and the different techniques used, are considered.
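For concreteness, the archetypal model of this kind is Fick's second law with Crank's series solution for an infinite slab (half-thickness L, uniform initial solute content, constant surface concentration), which is fitted to the measured solute-content-versus-time curve to extract D_eff:

```latex
% Fick's second law in one dimension
\[ \frac{\partial C}{\partial t} = D_{\mathrm{eff}}\,\frac{\partial^2 C}{\partial x^2} \]
% Crank's series solution for an infinite slab of half-thickness L;
% M_t / M_inf is the fractional solute uptake or loss at time t.
\[ \frac{M_t}{M_\infty} = 1 - \frac{8}{\pi^2}\sum_{n=0}^{\infty}\frac{1}{(2n+1)^2}
   \exp\!\left(-\frac{(2n+1)^2\pi^2 D_{\mathrm{eff}}\,t}{4L^2}\right) \]
```

Analogous series exist for cylindrical and spherical geometries, which is why the review stresses acquiring curves appropriate to the chosen model equation: the fitted D_eff is only as good as the match between the assumed geometry and boundary conditions and the actual experiment.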
Abstract:
This paper formally derives a blocked version, with blocks of size two, of a new path-based neural branch prediction algorithm (FPP), targeting a lower-cost hardware solution while maintaining input-output characteristics similar to those of the original algorithm. The blocked solution, here referred to as the B2P algorithm, is obtained using graph theory and retiming methods. Verification approaches were exercised to show that the prediction performances of the FPP and B2P algorithms differ by within one misprediction per thousand instructions, using a known framework for branch prediction evaluation. For a chosen FPGA device, circuits generated from the B2P algorithm showed average area savings of over 25% against circuits for the FPP algorithm with similar timing performance, thus making the proposed blocked predictor superior from a practical viewpoint.
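The abstract does not give FPP's equations, so the following is only a generic path-based neural (perceptron-style) predictor of the family being blocked; the history length, table size and training threshold are all hypothetical:

```python
# Generic path-based perceptron predictor sketch (not the paper's FPP/B2P).
HIST = 8       # history length (hypothetical)
TABLE = 256    # weight-table rows, indexed by hashed branch/path addresses
THETA = 16     # training threshold (hypothetical)

weights = [[0] * (HIST + 1) for _ in range(TABLE)]
path = [0] * HIST      # recent branch addresses
history = [0] * HIST   # recent outcomes encoded as +1 / -1

def predict(pc):
    idx = [pc % TABLE] + [a % TABLE for a in path]
    y = weights[idx[0]][0] + sum(
        weights[idx[i + 1]][i + 1] * history[i] for i in range(HIST))
    return y, idx  # predict taken when y >= 0

def train(taken, y, idx):
    t = 1 if taken else -1
    if (y >= 0) != taken or abs(y) <= THETA:  # mispredicted or low confidence
        weights[idx[0]][0] += t
        for i in range(HIST):
            weights[idx[i + 1]][i + 1] += t * history[i]

def advance(pc, taken):
    path.insert(0, pc); path.pop()
    history.insert(0, 1 if taken else -1); history.pop()
```

Blocking such a predictor into size-two blocks, as the paper does via retiming, restructures these per-history-position multiply-accumulates so the resulting FPGA circuit is smaller without appreciably changing the input-output behaviour.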
Abstract:
How can a bridge be built between autonomic computing approaches and parallel computing systems? How can autonomic computing approaches be extended towards building reliable systems? How can existing technologies be merged to provide a solution for self-managing systems? The work reported in this paper aims to answer these questions by proposing Swarm-Array Computing, a novel technique inspired by swarm robotics and built on the foundations of autonomic and parallel computing paradigms. Two approaches, based on intelligent cores and intelligent agents, are proposed to achieve autonomy in parallel computing systems. The feasibility of the proposed approaches is validated on a multi-agent simulator.
Abstract:
The work reported in this paper is motivated by biomimetic inspiration - the transformation of patterns. The major issue addressed is the development of feasible methods for transformation based on a macroscopic tool. The general requirement for the feasibility of the transformation method is determined by classifying pattern formation approaches and their characteristics. A formal definition of pattern transformation is provided, and four special cases, namely elementary and geometric transformations based on repositioning all or some robotic agents, are introduced. A feasible method for transforming patterns geometrically, based on the macroscopic parameter operation of a swarm, is considered. The transformation method is applied to a swarm model which lends itself to the transformation technique. Simulation studies are developed to validate the feasibility of the approach, and they do indeed confirm it.
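As a purely illustrative reading of "geometric transformation by repositioning all agents" (the scale and rotation parameters below are stand-ins for the paper's macroscopic parameters):

```python
import numpy as np

def transform(positions, s=1.5, theta=np.pi / 4):
    """Scale by s and rotate by theta about the swarm centroid.

    positions: (N, 2) array of agent coordinates."""
    c = positions.mean(axis=0)                     # macroscopic quantity: centroid
    R = np.array([[np.cos(theta), -np.sin(theta)],
                  [np.sin(theta),  np.cos(theta)]])
    return c + s * (positions - c) @ R.T

# Example: rotate and enlarge a square pattern of four agents
square = np.array([[0., 0.], [1., 0.], [1., 1.], [0., 1.]])
print(transform(square))
```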
Abstract:
Several pixel-based people counting methods have been developed over the years. Among these, the product of scale-weighted pixel sums and a linear correlation coefficient is a popular people counting approach. However, most approaches have paid little attention to resolving the true background and instead take all foreground pixels into account. With large crowds moving at varying speeds, and with the presence of other moving objects such as vehicles, this approach is prone to problems. In this paper we present a method which concentrates on determining the true foreground, i.e. human-image pixels only. To do this we have proposed, implemented and comparatively evaluated a human detection layer to make people counting more robust in the presence of noise and a lack of empty background sequences. We show the effect of combining human detection with a pixel-map based algorithm to i) count only human-classified pixels and ii) prevent foreground pixels belonging to humans from being absorbed into the background model. We evaluate the performance of this approach on the PETS 2009 dataset using various configurations of the proposed methods. Our evaluation demonstrates that the basic benchmark method we implemented can achieve an accuracy of up to 87% on sequence "S1.L1 13-57 View 001", and our proposed approach can achieve up to 82% on sequence "S1.L3 14-33 View 001", where the crowd stops and the benchmark accuracy falls to 64%.
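A minimal sketch of the combination described above, assuming a simple frame-differencing background model (the array names, threshold and update rate are all hypothetical):

```python
import numpy as np

def estimate_count(frame, background, human_mask, scale_map, alpha, threshold=25):
    """frame, background: HxW grayscale arrays; human_mask: HxW booleans from a
    human detector; scale_map: HxW perspective weights; alpha: linear coefficient
    fitted by regressing weighted pixel sums against ground-truth counts."""
    foreground = np.abs(frame.astype(int) - background.astype(int)) > threshold
    true_foreground = foreground & human_mask          # (i) human pixels only
    return alpha * (true_foreground * scale_map).sum()

def update_background(background, frame, human_mask, rate=0.05):
    # (ii) keep human-classified pixels out of the background update so that
    # stationary crowds are not absorbed into the background model
    blended = (1 - rate) * background + rate * frame
    return np.where(human_mask, background, blended)
```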
Abstract:
User interfaces have the primary role of enabling access to information that meets individual users' needs. However, user-system interaction is still rigid, especially in complex environments where various types of users are involved. Among the approaches for improving user interface agility, we present a normative approach to the design of web application interfaces, which allows delivering personalized services to users according to parameters extracted from the simulation of norms in the social context. A case study in an e-Government context is used to illustrate the implications of the approach.
Abstract:
The identification of non-linear systems using only observed finite datasets has become a mature research area over the last two decades. A class of linear-in-the-parameters models with universal approximation capabilities has been intensively studied and widely used due to the availability of many linear-learning algorithms and their inherent convergence conditions. This article presents a systematic overview of basic research on model selection approaches for linear-in-the-parameters models. One of the fundamental problems in non-linear system identification is to find the minimal model with the best generalisation performance from observational data only. The important concepts for achieving good model generalisation used in various non-linear system-identification algorithms are first reviewed, including Bayesian parameter regularisation and model selection criteria based on cross-validation and experimental design. A significant advance in machine learning has been the development of the support vector machine as a means of identifying kernel models based on the structural risk minimisation principle. Developments in convex-optimisation-based model construction algorithms, including support vector regression algorithms, are outlined. Input selection algorithms and on-line system identification algorithms are also included in this review. Finally, some industrial applications of non-linear models are discussed.
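As a minimal sketch of one such selective criterion, forward selection of a linear-in-the-parameters model can be scored by closed-form leave-one-out cross-validation (the function names and stopping rule below are illustrative, assuming a matrix of candidate regressor columns):

```python
import numpy as np

def loo_mse(X, y):
    # Closed-form leave-one-out residuals for least squares: e_i / (1 - h_ii)
    H = X @ np.linalg.pinv(X)          # hat matrix X (X^T X)^-1 X^T
    e = y - H @ y
    return np.mean((e / (1.0 - np.diag(H))) ** 2)

def forward_select(candidates, y, max_terms=5):
    chosen, remaining, best = [], list(range(candidates.shape[1])), np.inf
    while remaining and len(chosen) < max_terms:
        scores = {j: loo_mse(candidates[:, chosen + [j]], y) for j in remaining}
        j, s = min(scores.items(), key=lambda kv: kv[1])
        if s >= best:                  # stop once generalisation stops improving
            break
        best = s
        chosen.append(j)
        remaining.remove(j)
    return chosen
```

The same greedy loop underlies orthogonal-least-squares-style construction algorithms, though those gain efficiency by orthogonalising the regressors incrementally rather than refitting from scratch at each step.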
An assessment of aerosol‐cloud interactions in marine stratus clouds based on surface remote sensing
Abstract:
An assessment of aerosol-cloud interactions (ACI) from ground-based remote sensing under coastal stratiform clouds is presented. The assessment utilizes a long-term, high-temporal-resolution data set from the Atmospheric Radiation Measurement (ARM) Program deployment at Pt. Reyes, California, United States, in 2005 to provide statistically robust measures of ACI and to characterize the variability of the measures based on variability in environmental conditions and observational approaches. The average ACI_N (= dln N_d / dln a, the change in cloud drop number concentration with aerosol concentration) is 0.48, within a physically plausible range of 0-1.0. Values vary between 0.18 and 0.69 with dependence on (1) the assumption of constant cloud liquid water path (LWP), (2) the relative value of cloud LWP, (3) methods for retrieving N_d, (4) aerosol size distribution, (5) updraft velocity, and (6) the scale and resolution of observations. The sensitivity of the local, diurnally averaged radiative forcing to this variability in ACI_N values, assuming an aerosol perturbation of 500 cm^-3 relative to a background concentration of 100 cm^-3, ranges between -4 and -9 W m^-2. Further characterization of ACI and its variability is required to reduce uncertainties in global radiative forcing estimates.
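In practice, ACI_N is commonly estimated as the slope of a least-squares fit of ln(N_d) against the ln of an aerosol proxy; the numbers below are synthetic and purely illustrative:

```python
import numpy as np

aerosol = np.array([120., 180., 260., 400., 610.])  # hypothetical aerosol proxy
n_d = np.array([55., 70., 82., 104., 128.])         # hypothetical N_d, cm^-3

slope, intercept = np.polyfit(np.log(aerosol), np.log(n_d), 1)
print(f"ACI_N ~= {slope:.2f}")  # physically plausible values lie in 0-1.0
```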
Abstract:
From birth onwards, the gastrointestinal (GI) tract of infants progressively acquires a complex range of micro-organisms. It is thought that by 2 years of age the GI microbial population has stabilized. Within the developmental period of the infant GI microbiota, weaning is considered to be most critical, as the infant switches from a milk-based diet (breast and/or formula) to a variety of food components. Longitudinal analysis of the biological succession of the infant GI/faecal microbiota is lacking. In this study, faecal samples were obtained regularly from 14 infants from 1 month to 18 months of age. Seven of the infants (including a set of twins) were exclusively breast-fed and seven were exclusively formula-fed prior to weaning, with 175 and 154 faecal samples, respectively, obtained from the two groups. Diversity and dynamics of the infant faecal microbiota were analysed by using fluorescence in situ hybridization and denaturing gradient gel electrophoresis. Overall, the data demonstrated large inter- and intra-individual differences in the faecal microbiological profiles during the study period. However, the infant faecal microbiota converged with time towards a climax community within and between feeding groups. Data from the twins showed the highest degree of similarity, both quantitatively and qualitatively. Inter-individual variation was evident within the infant faecal microbiota and its development, even among exclusively formula-fed infants receiving the same diet. These data can help future clinical trials (e.g. of targeted weaning products) to organize protocols and obtain a more accurate outline of the changes and dynamics of the infant GI microbiota.
Abstract:
Increasingly, the microbiological scientific community is relying on molecular biology to define the complexity of the gut flora and to distinguish one organism from the next. This is particularly pertinent in the field of probiotics, and probiotic therapy, where identifying probiotics from the commensal flora is often warranted. Current techniques, including genetic fingerprinting, gene sequencing, oligonucleotide probes and specific primer selection, discriminate closely related bacteria with varying degrees of success. Additional molecular methods being employed to determine the constituents of complex microbiota in this area of research are community analysis, denaturing gradient gel electrophoresis (DGGE)/temperature gradient gel electrophoresis (TGGE), fluorescent in situ hybridisation (FISH) and probe grids. Certain approaches enable specific aetiological agents to be monitored, whereas others allow the effects of dietary intervention on bacterial populations to be studied. Other approaches demonstrate diversity, but may not always enable quantification of the population. At the heart of current molecular methods is sequence information gathered from culturable organisms. However, the diversity and novelty identified when applying these methods to the gut microflora demonstrates how little is known about this ecosystem. Of greater concern is the inherent bias associated with some molecular methods. As we understand more of the complexity and dynamics of this diverse microbiota we will be in a position to develop more robust molecular-based technologies to examine it. In addition to identification of the microbiota and discrimination of probiotic strains from commensal organisms, the future of molecular biology in the field of probiotics and the gut flora will, no doubt, stretch to investigations of functionality and activity of the microflora, and/or specific fractions. The quest will be to demonstrate the roles of probiotic strains in vivo and not simply their presence or absence.