982 results for canonical redundancy analysis


Relevance: 30.00%

Abstract:

Introduction – A previous University of Lisbon (UL) project – a bibliometric benchmarking analysis of the University of Lisbon for the period 2000-2009 – led to the creation of a database to support research information (ULSR). However, this system was not integrated with the other systems existing at the University, such as the UL Libraries Integrated System (SIBUL) and the Repository of the University of Lisbon (Repositório.UL). Since the libraries were called to be part of the process, the Faculty of Pharmacy Library team felt it was very important to get all the systems connected or, at least, to reuse that data in the library systems. Objectives – The main goals were to centralise all the scientific research produced at the Faculty of Pharmacy, make it available to the entire Faculty, involve researchers and the library team, capitalise on and reinforce teamwork by integrating several distinct projects, and reduce task redundancy. Methods – Our starting point was the data collection imported into ULSR from the ISI Web of Science (WoS) for the period 2000-2009. All researchers and publications indexed in WoS were identified. A first validation was carried out to identify all researchers and their affiliations (university, faculty, department and unit); the final validation was done by each researcher. In a second round, covering the same period, all Faculty of Pharmacy researchers identified their scientific work published in other databases/resources (NOT WoS). For our strategy it was important to gather all the references, and essential to link them to the corresponding digital objects. Each previously identified researcher was asked to register all references to their 'NOT WoS' published works in ULSR and, at the same time, to submit the PDF files (for both WoS and NOT WoS works) to a personal area on the web server.
This effort enabled a more reliable validation and allowed us to prepare the data and metadata for import into the Repository and the Library Catalogue. Results – 558 documents related to 122 researchers were added to ULSR. 1378 bibliographic records (WoS + NOT WoS) were converted into UNIMARC and Dublin Core formats, and all records were integrated into the catalogue and repository. Conclusions – Although different strategies could be adopted by each library team, we intend to share this experience and give some tips on what can be done, and on how the Faculty of Pharmacy created and implemented its strategy.
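The conversion of validated records into Dublin Core before repository ingest can be sketched as below. This is a minimal illustration only: the input field names and sample record are hypothetical, not taken from ULSR, though the `dc:` element names are the standard Dublin Core ones.

```python
# Sketch: mapping one validated bibliographic record (a dict with
# hypothetical field names) to Dublin Core elements for repository ingest.

def to_dublin_core(record):
    """Convert a minimal bibliographic record to a Dublin Core dict."""
    return {
        "dc:title": record["title"],
        "dc:creator": "; ".join(record["authors"]),
        "dc:date": str(record["year"]),
        "dc:type": record.get("doc_type", "article"),
        "dc:source": record.get("source", ""),  # e.g. 'WoS' or 'NOT WoS'
    }

record = {
    "title": "Example paper",
    "authors": ["Silva, A.", "Costa, B."],
    "year": 2007,
    "source": "WoS",
}
dc = to_dublin_core(record)
```

In practice each such dict would be serialised to the repository's ingest format; the UNIMARC side of the conversion follows the same record-to-fields pattern with different tags.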

Relevance: 30.00%

Abstract:

We estimate the 'fundamental' component of euro area sovereign bond yield spreads, i.e. the part of bond spreads that can be justified by country-specific economic factors, euro area economic fundamentals, and international influences. The yield spread decomposition is achieved using a multi-market, no-arbitrage affine term structure model with a unique pricing kernel. More specifically, we use the canonical representation proposed by Joslin, Singleton, and Zhu (2011) and introduce, alongside the standard spanned factors, a set of unspanned macro factors, as in Joslin, Priebsch, and Singleton (2013). The model is applied to yield curve data from Belgium, France, Germany, Italy, and Spain over the period 2005-2013. Overall, our results show that economic fundamentals are the dominant drivers behind sovereign bond spreads. Nevertheless, shocks unrelated to the fundamental component of the spread have played an important role in the dynamics of bond spreads since the intensification of the sovereign debt crisis in the summer of 2011.
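The idea of splitting a spread into a fundamental part and a residual can be illustrated with a deliberately simple one-factor regression. This is only a toy sketch of the decomposition concept, not the affine term structure model the paper uses, and all numbers below are invented.

```python
# Toy decomposition: spread = fitted ('fundamental') part + residual.
# One hypothetical fundamental factor, ordinary least squares by hand.

def ols_1d(x, y):
    """Intercept and slope of y = a + b*x by ordinary least squares."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    b = (sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
         / sum((xi - mx) ** 2 for xi in x))
    return my - b * mx, b

debt_ratio = [0.6, 0.8, 1.0, 1.2]   # hypothetical fundamental factor
spread     = [0.5, 1.0, 1.6, 2.1]   # observed spreads (pct points)

a, b = ols_1d(debt_ratio, spread)
fundamental = [a + b * x for x in debt_ratio]        # justified by the factor
non_fundamental = [s - f for s, f in zip(spread, fundamental)]  # residual
```

The paper's no-arbitrage model plays the role of the fitted line here: whatever the pricing model cannot attribute to fundamentals is the non-fundamental component.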

Relevance: 30.00%

Abstract:

This paper presents a detailed analysis of adsorption of supercritical fluids on nonporous graphitized thermal carbon black. Two methods are employed in the analysis. One is the molecular layer structure theory (MLST), proposed recently by our group, and the other is the grand canonical Monte Carlo (GCMC) simulation. They were applied to describe the adsorption of argon, krypton, methane, ethylene, and sulfur hexafluoride on graphitized thermal carbon black. It was found that the MLST describes all the experimental data at various temperatures well. Results from GCMC simulations describe well the data at low pressure but show some deviations at higher pressures for all the adsorbates tested. The question of negative surface excess is also discussed in this paper.
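The GCMC method named above samples the grand canonical ensemble by inserting and deleting adsorbate molecules at fixed chemical potential, volume and temperature. The sketch below shows the standard insertion/deletion acceptance rule in reduced units (with the thermal wavelength folded into the chemical potential for simplicity); it is a minimal illustration, not the simulation code used in the paper.

```python
import math

# Grand canonical Monte Carlo exchange step: particles are swapped with a
# reservoir at fixed chemical potential mu, volume V and temperature
# (beta = 1/kT). Reduced units; thermal wavelength absorbed into mu.

def p_insert(N, V, beta, mu, dU):
    """Acceptance probability for inserting particle N+1 (energy change dU)."""
    return min(1.0, V / (N + 1) * math.exp(beta * (mu - dU)))

def p_delete(N, V, beta, mu, dU):
    """Acceptance probability for deleting one of N particles."""
    return min(1.0, N / V * math.exp(-beta * (mu + dU)))

# A very favourable insertion into an empty box is always accepted:
p = p_insert(N=0, V=100.0, beta=1.0, mu=0.0, dU=-1.0)
```

Averaging the particle number over many such accepted moves gives the surface excess that is compared with experiment, including the negative excesses discussed in the paper.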

Relevance: 30.00%

Abstract:

In this paper, we investigate the suitability of grand canonical Monte Carlo simulation for describing the adsorption equilibria of flexible n-alkanes (butane, pentane and hexane) on graphitized thermal carbon black. The n-alkane potential model of Martin and Siepmann (J. Phys. Chem. 102 (1998) 2569) is employed, and the flexibility of the molecule is accounted for in the simulation. To this end we study two models: one is the fully flexible molecular model, in which the n-alkane is subject to bending and torsion, while the other is the rigid molecular model, in which all carbon atoms reside on the same plane. It is found that (i) the adsorption isotherms of these two models are close to each other, suggesting that, with respect to adsorption, n-alkanes behave mostly as rigid molecules, although the isotherm of the longer chain n-hexane is better described by the flexible molecular model; and (ii) the isotherms agree very well with the experimental data, at least up to two layers on the surface.
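The "flexible" model subjects the chain to bending and torsion potentials of the usual united-atom functional forms: a harmonic bending term and a cosine-series torsion term. The sketch below shows those forms; the parameter values are placeholders for illustration, not necessarily the constants of the Martin-Siepmann force field.

```python
import math

# Intramolecular terms of a flexible united-atom n-alkane model:
# harmonic bending plus a cosine-series torsion (energies in K).
# Parameter values are illustrative placeholders.

def bending_energy(theta, k_theta=62500.0, theta0=math.radians(114.0)):
    """Harmonic bending: U = 0.5 * k * (theta - theta0)**2."""
    return 0.5 * k_theta * (theta - theta0) ** 2

def torsion_energy(phi, c1=355.0, c2=-68.2, c3=791.3):
    """Cosine-series torsion about a C-C bond."""
    return (c1 * (1 + math.cos(phi))
            + c2 * (1 - math.cos(2 * phi))
            + c3 * (1 + math.cos(3 * phi)))

# A rigid planar (all-trans) chain sits at the bending minimum:
u_bend = bending_energy(math.radians(114.0))
```

The rigid model of the paper amounts to freezing these two terms at their all-trans minimum, which is why the two isotherms differ only for the longer chains.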

Relevance: 30.00%

Abstract:

Most magnetic resonance imaging (MRI) spatial encoding techniques employ low-frequency pulsed magnetic field gradients that undesirably induce multiexponentially decaying eddy currents in nearby conducting structures of the MRI system. The eddy currents degrade the switching performance of the gradient system, distort the MRI image, and introduce thermal loads in the cryostat vessel and superconducting MRI components. Heating of superconducting magnets due to induced eddy currents is particularly problematic as it offsets the superconducting operating point, which can cause a system quench. A numerical characterization of transient eddy current effects is vital for their compensation/control and further advancement of the MRI technology as a whole. However, transient eddy current calculations are particularly computationally intensive. In large-scale problems, such as gradient switching in MRI, conventional finite-element method (FEM)-based routines impose very large computational loads during generation/solving of the system equations. Therefore, other computational alternatives need to be explored. This paper outlines a three-dimensional finite-difference time-domain (FDTD) method in cylindrical coordinates for the modeling of low-frequency transient eddy currents in MRI, as an extension to the recently proposed time-harmonic scheme. The weakly coupled Maxwell's equations are adapted to the low-frequency regime by downscaling the speed of light constant, which permits the use of larger FDTD time steps while maintaining the validity of the Courant-Friedrichs-Lewy stability condition. The principal hypothesis of this work is that the modified FDTD routine can be employed to analyze pulsed-gradient-induced, transient eddy currents in superconducting MRI system models. 
The hypothesis is supported through a verification of the numerical scheme on a canonical problem and by analyzing undesired temporal eddy current effects such as the B-0-shift caused by actively shielded symmetric/asymmetric transverse x-gradient head and unshielded z-gradient whole-body coils operating in proximity to a superconducting MRI magnet.
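The time-step gain from downscaling the speed of light follows directly from the Courant-Friedrichs-Lewy bound. The sketch below uses the familiar uniform Cartesian-grid form of the bound for simplicity (the paper works in cylindrical coordinates); cell sizes are illustrative.

```python
import math

# CFL stability bound for FDTD on a uniform Cartesian grid: downscaling
# the speed of light by a factor s raises the maximum stable time step
# by the same factor s.

C0 = 299_792_458.0  # speed of light in vacuum (m/s)

def max_dt(c, dx, dy, dz):
    """Largest stable FDTD time step for wave speed c on a uniform grid."""
    return 1.0 / (c * math.sqrt(1 / dx**2 + 1 / dy**2 + 1 / dz**2))

dx = dy = dz = 5e-3                           # illustrative 5 mm cells
dt_full = max_dt(C0, dx, dy, dz)              # true speed of light
dt_scaled = max_dt(C0 / 1000.0, dx, dy, dz)   # c downscaled 1000x
```

Because the eddy-current dynamics of interest are at low frequency, the artificially slow wave speed does not disturb the physics being modeled, while each time step covers a thousand times more simulated time.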

Relevance: 30.00%

Abstract:

The kinematic mapping of a rigid open-link manipulator is a homomorphism between Lie groups. The homomorphism has solution groups that act on an inverse kinematic solution element. A canonical representation of solution group operators that act on a solution element of three and seven degree-of-freedom (dof) dextrous manipulators is determined by geometric analysis. Seven canonical solution groups are determined for the seven dof Robotics Research K-1207 and Hollerbach arms. The solution element of a dextrous manipulator is a collection of trivial fibre bundles with solution fibres homotopic to the torus. If fibre solutions are parameterised by a scalar, a direct inverse function that maps the scalar and Cartesian base space coordinates to solution element fibre coordinates may be defined. A direct inverse parameterisation of a solution element may be approximated by a local linear map generated by an inverse augmented Jacobian correction of a linear interpolation. The action of canonical solution group operators on a local linear approximation of the solution element of inverse kinematics of dextrous manipulators generates cyclical solutions. The solution representation is proposed as a model of inverse kinematic transformations in primate nervous systems. Simultaneous calibration of a composition of stereo-camera and manipulator kinematic models is under-determined by equi-output parameter groups in the composition of stereo-camera and Denavit Hartenberg (DH) models. An error measure for simultaneous calibration of a composition of models is derived and parameter subsets with no equi-output groups are determined by numerical experiments to simultaneously calibrate the composition of homogeneous or pan-tilt stereo-camera with DH models. 
For acceleration of exact Newton second-order re-calibration of DH parameters after a sequential calibration of stereo-camera and DH parameters, an optimal numerical evaluation of DH matrix first order and second order error derivatives with respect to a re-calibration error function is derived, implemented and tested. A distributed object environment for point and click image-based tele-command of manipulators and stereo-cameras is specified and implemented that supports rapid prototyping of numerical experiments in distributed system control. The environment is validated by a hierarchical k-fold cross validated calibration to Cartesian space of a radial basis function regression correction of an affine stereo model. Basic design and performance requirements are defined for scalable virtual micro-kernels that broker inter-Java-virtual-machine remote method invocations between components of secure manageable fault-tolerant open distributed agile Total Quality Managed ISO 9000+ conformant Just in Time manufacturing systems.
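The idea of discrete solution groups acting on inverse kinematic solutions can be seen in miniature on a planar two-link arm, whose two solution branches (elbow-up/elbow-down) are related by a sign flip on the elbow angle. The sketch below is only that simplest illustration, with arbitrary link lengths; it is not the thesis's seven-dof construction.

```python
import math

# Planar 2-link inverse kinematics: the two solution branches
# (elbow-down / elbow-up) are generated by a discrete group action
# (the sign of the elbow angle). Link lengths are illustrative.

def ik_2link(x, y, l1=1.0, l2=1.0):
    """Return both (q1, q2) joint-angle solutions reaching (x, y)."""
    c2 = (x * x + y * y - l1 * l1 - l2 * l2) / (2 * l1 * l2)
    s2 = math.sqrt(max(0.0, 1.0 - c2 * c2))
    solutions = []
    for sign in (+1, -1):                    # the two group elements
        q2 = math.atan2(sign * s2, c2)
        q1 = math.atan2(y, x) - math.atan2(l2 * math.sin(q2),
                                           l1 + l2 * math.cos(q2))
        solutions.append((q1, q2))
    return solutions

def fk_2link(q1, q2, l1=1.0, l2=1.0):
    """Forward kinematics of the same arm."""
    return (l1 * math.cos(q1) + l2 * math.cos(q1 + q2),
            l1 * math.sin(q1) + l2 * math.sin(q1 + q2))

solutions = ik_2link(1.2, 0.5)
```

For the seven-dof arms in the thesis the branches are fibres homotopic to the torus rather than isolated points, but the group-action structure is the same in spirit.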

Relevance: 30.00%

Abstract:

Amino acid substitution plays a vital role in both the molecular engineering of proteins and analysis of structure-activity relationships. High-throughput substitution is achieved by codon randomisation, which generates a library of mutants (a randomised gene library) in a single experiment. For full randomisation, key codons are typically replaced with NNN (64 sequences) or NNG/T (32 sequences). This obligates cloning of redundant codons alongside those required to encode the 20 amino acids. As the number of randomised codons increases, there is therefore a progressive loss of randomisation efficiency; the number of genes required per protein rises exponentially. The redundant codons cause amino acids to be represented unevenly; for example, methionine is encoded just once within NNN, whilst arginine is encoded six times. Finally, the organisation of the genetic code makes it impossible to encode functional subsets of amino acids (e.g. polar residues only) in a single experiment. Here, we present a novel solution to randomisation where genetic redundancy is eliminated; the number of different genes equals the number of encoded proteins, regardless of codon number. There is no inherent amino acid bias and any required subset of amino acids may be encoded in one experiment. This generic approach should be widely applicable in studies involving randomisation of proteins. © 2003 Elsevier Ltd. All rights reserved.
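The redundancy figures quoted above (methionine once, arginine six times under NNN) can be checked directly by enumerating the standard genetic code. The sketch below encodes the standard translation table compactly, with codons ordered T, C, A, G in each position.

```python
# Counting codon redundancy under full (NNN) and NNG/T randomisation,
# using the standard genetic code. Codons are enumerated with bases
# ordered T, C, A, G in each position, matching the usual table layout.

BASES = "TCAG"
AA = "FFLLSSSSYY**CC*WLLLLPPPPHHQQRRRRIIIMTTTTNNKKSSRRVVVVAAAADDEEGGGG"

def codon_table():
    """Map each of the 64 codons to its amino acid ('*' = stop)."""
    codons = [b1 + b2 + b3 for b1 in BASES for b2 in BASES for b3 in BASES]
    return dict(zip(codons, AA))

def counts(allowed_third):
    """Codons per amino acid when the third base is restricted."""
    tally = {}
    for codon, aa in codon_table().items():
        if codon[2] in allowed_third:
            tally[aa] = tally.get(aa, 0) + 1
    return tally

nnn = counts("TCAG")   # full randomisation: 64 codons
nngt = counts("GT")    # NNG/T randomisation: 32 codons
```

Running the tally confirms the bias the abstract describes: under NNN, methionine has one codon and arginine six, while NNG/T still covers all 20 amino acids with half the sequences.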

Relevance: 30.00%

Abstract:

Sparse code division multiple access (CDMA), a variation on the standard CDMA method in which the spreading (signature) matrix contains only a relatively small number of nonzero elements, is presented and analysed using methods of statistical physics. The analysis provides results on the performance of maximum likelihood decoding for sparse spreading codes in the large system limit. We present results for both cases of regular and irregular spreading matrices for the binary additive white Gaussian noise channel (BIAWGN) with a comparison to the canonical (dense) random spreading code. © 2007 IOP Publishing Ltd.
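A sparse spreading (signature) matrix of the kind analysed above can be sketched as follows: each user's column carries only L nonzero chips instead of a dense random code. This is a minimal column-regular illustration with invented sizes; the paper's regular ensembles also constrain the rows.

```python
import random

# Column-regular sparse spreading matrix: each user's signature column
# has exactly L nonzero chips, drawn as +/-1 at random positions.

def sparse_spreading_matrix(n_chips, n_users, L, seed=0):
    """Return the signature matrix as a list of per-user columns."""
    rng = random.Random(seed)
    columns = []
    for _ in range(n_users):
        col = [0] * n_chips
        for i in rng.sample(range(n_chips), L):  # L distinct chip positions
            col[i] = rng.choice((-1, 1))
        columns.append(col)
    return columns

S = sparse_spreading_matrix(n_chips=16, n_users=8, L=3)
```

In the dense canonical code every entry of each column would be a random +/-1 chip; sparsity keeps the per-chip interference low and makes the statistical-physics analysis of maximum likelihood decoding tractable.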

Relevance: 30.00%

Abstract:

This thesis discusses and assesses the resources available to Asian entrepreneurs in the West Midlands' clothing industry and how they are used by these small businessmen to address opportunities in the market economy within the constraints imposed. The fashion industry is volatile and depends on flexible firms which can respond quickly to short-run production schedules. Small firms are best able to respond to this market environment. The production of jeans presents an interesting departure from the mainstream fashion industry. It is traditionally geared towards long-run production schedules, where multinational enterprises have artificially diversified the market, promoting the 'right' brand name, and have established control of the upper end of the market, whilst imports from Newly Developing Countries have catered for cheap copies at the lower end. In recent years a fashion element to jeans has emerged, opening a market gap for U.K. manufacturers to respond in the same way as for other fashion articles. A large immigrant population, previously serving the now declining factories and foundries of the West Midlands but, through redundancy, no longer part of this employment sector, has responded to economic constraints and market opportunities by drawing on ethnic network resources for competitive access to labour, finance and contacts, to attack the emergent market gap. Two models of these Asian entrepreneurs are developed. The first is someone who has professionally and actively tackled the market gap and become established. These entrepreneurs are usually educated, have personal experience in business and were amongst the first to perceive opportunities to enter the industry, actively utilising their ethnicity as a resource on which to draw for favourable access to cheap, flexible labour and capital. The second model is composed of later entrants to jeans manufacturing. 
They have less formal education and experience and have been pushed into self-employment by the constraints of unemployment. Their ethnicity is used passively as a resource. They are more likely to be confined to the marginal activity of 'cut, make and trim' and have little opportunity to increase profit margins, become established or expand.

Relevance: 30.00%

Abstract:

Battery energy storage systems have traditionally been manufactured using new batteries with good reliability. The high cost of such a system has led to investigations of using second-life transportation batteries to provide an alternative energy storage capability. However, the reliability and performance of these batteries is unclear, and multi-modular power electronics with redundancy have been suggested as a means of addressing this issue. This paper reviews work already undertaken on battery failure rates to suggest suitable figures for use in reliability calculations. The paper then uses reliability analysis and a numerical example to investigate six different multi-modular topologies and suggests how the number of series battery strings and the degree of power electronic module redundancy should be determined for the lowest hardware cost. The results reveal that the cascaded dc-side modular topology with a single inverter is the lowest-cost solution for a range of battery failure rates.
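The effect of module redundancy on system reliability can be illustrated with the standard k-out-of-n formula: the system survives as long as at least k of its n identical modules work. This is a generic textbook sketch with invented module reliability, not the paper's battery failure-rate figures or topology comparison.

```python
from math import comb

# k-out-of-n reliability: probability that at least k of n independent,
# identical modules (each with reliability R) are working.

def k_of_n_reliability(k, n, R):
    return sum(comb(n, i) * R**i * (1 - R)**(n - i) for i in range(k, n + 1))

R = 0.95  # illustrative per-module reliability over the mission time
no_redundancy = k_of_n_reliability(4, 4, R)  # all 4 modules must work
one_spare     = k_of_n_reliability(4, 5, R)  # one redundant module added
```

Comparing such reliability figures against the hardware cost of the extra modules is the trade-off the paper evaluates across its six topologies.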

Relevance: 30.00%

Abstract:

Aberrant behavior of biological signaling pathways has been implicated in diseases such as cancers. Therapies have been developed to target proteins in these networks in the hope of curing the illness or bringing about remission. However, identifying targets for drug inhibition that exhibit good therapeutic index has proven to be challenging since signaling pathways have a large number of components and many interconnections such as feedback, crosstalk, and divergence. Unfortunately, some characteristics of these pathways such as redundancy, feedback, and drug resistance reduce the efficacy of single drug target therapy and necessitate the employment of more than one drug to target multiple nodes in the system. However, choosing multiple targets with high therapeutic index poses more challenges since the combinatorial search space could be huge. To cope with the complexity of these systems, computational tools such as ordinary differential equations have been used to successfully model some of these pathways. Regrettably, for building these models, experimentally-measured initial concentrations of the components and rates of reactions are needed which are difficult to obtain, and in very large networks, they may not be available at the moment. Fortunately, there exist other modeling tools, though not as powerful as ordinary differential equations, which do not need the rates and initial conditions to model signaling pathways. Petri net and graph theory are among these tools. In this thesis, we introduce a methodology based on Petri net siphon analysis and graph network centrality measures for identifying prospective targets for single and multiple drug therapies. In this methodology, first, potential targets are identified in the Petri net model of a signaling pathway using siphon analysis. Then, the graph-theoretic centrality measures are employed to prioritize the candidate targets. 
Also, an algorithm is developed to check whether the candidate targets are able to disable the intended outputs in the graph model of the system or not. We implement structural and dynamical models of ErbB1-Ras-MAPK pathways and use them to assess and evaluate this methodology. The identified drug-targets, single and multiple, correspond to clinically relevant drugs. Overall, the results suggest that this methodology, using siphons and centrality measures, shows promise in identifying and ranking drugs. Since this methodology only uses the structural information of the signaling pathways and does not need initial conditions and dynamical rates, it can be utilized in larger networks.
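The prioritisation step can be illustrated with the simplest centrality measure, degree, on a toy directed graph. The node names below follow the familiar EGFR-Ras-Raf-MEK-ERK cascade purely for illustration; this is not the thesis's full ErbB1-Ras-MAPK model, nor its siphon analysis.

```python
# Ranking candidate targets by degree centrality on a toy directed
# signaling graph (adjacency lists: node -> downstream targets).

graph = {
    "EGFR": ["Ras"],
    "Ras":  ["Raf"],
    "Raf":  ["MEK"],
    "MEK":  ["ERK"],
    "ERK":  [],
}

def degree_centrality(g):
    """Total degree (in + out) of each node in a directed graph."""
    deg = {n: len(targets) for n, targets in g.items()}
    for targets in g.values():
        for t in targets:
            deg[t] = deg.get(t, 0) + 1
    return deg

ranking = sorted(degree_centrality(graph).items(),
                 key=lambda kv: kv[1], reverse=True)
```

In the methodology described above, siphon analysis first narrows the candidate set, and a centrality measure of this kind then orders the surviving candidates; richer measures (betweenness, closeness) follow the same pattern.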

Relevance: 30.00%

Abstract:

Ecological network analysis was applied in the Seine estuary ecosystem, northern France, integrating ecological data from the years 1996 to 2002. The Ecopath with Ecosim (EwE) approach was used to model the trophic flows in 6 spatial compartments leading to 6 distinct EwE models: the navigation channel and the two channel flanks in the estuary proper, and 3 marine habitats in the eastern Seine Bay. Each model included 12 consumer groups, 2 primary producers, and one detritus group. Ecological network analysis was performed, including a set of indices, keystoneness, and trophic spectrum analysis to describe the contribution of the 6 habitats to the Seine estuary ecosystem functioning. Results showed that the two habitats with a functioning most related to a stressed state were the northern and central navigation channels, where building works and constant maritime traffic are considered major anthropogenic stressors. The strong top-down control highlighted in the other 4 habitats was not present in the central channel, showing instead (i) a change in keystone roles in the ecosystem towards sediment-based, lower trophic levels, and (ii) a higher system omnivory. The southern channel evidenced the highest system activity (total system throughput), the higher trophic specialisation (low system omnivory), and the lowest indication of stress (low cycling and relative redundancy). Marine habitats showed higher fish biomass proportions and higher transfer efficiencies per trophic levels than the estuarine habitats, with a transition area between the two that presented intermediate ecosystem structure. The modelling of separate habitats permitted disclosing each one's response to the different pressures, based on their a priori knowledge. Network indices, although non-monotonously, responded to these differences and seem a promising operational tool to define the ecological status of transitional water ecosystems.
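One of the network indices mentioned above, total system throughput (TST), is simply the sum of all flows in the system. The sketch below computes it for an invented three-compartment flow matrix; the compartments and flow values are illustrative, not the Seine estuary models.

```python
# Total system throughput (TST) of a small trophic flow network.
# flows[i][j] = flow from compartment i to compartment j (e.g. gC/m2/yr);
# the 3-compartment matrix below is invented for illustration.

flows = [
    [0.0, 12.0, 3.0],   # primary producers -> consumers, detritus
    [0.0,  0.0, 5.0],   # consumers -> detritus
    [0.0,  1.0, 0.0],   # detritus -> consumers (detritivory)
]

def total_system_throughput(F):
    """Sum of all flows in the network."""
    return sum(sum(row) for row in F)

tst = total_system_throughput(flows)
```

Indices such as system omnivory, cycling, and relative redundancy are likewise functions of this flow matrix, which is why a single EwE model per habitat suffices to compute the whole index set.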

Relevance: 30.00%

Abstract:

Insights into the genomic adaptive traits of Treponema pallidum, the causative bacterium of syphilis, have long been hampered due to the absence of in vitro culture models and the constraints associated with its propagation in rabbits. Here, we have bypassed the culture bottleneck by means of a targeted strategy never applied to uncultivable bacterial human pathogens to directly capture whole-genome T. pallidum data in the context of human infection. This strategy has unveiled a scenario of discreet T. pallidum interstrain single-nucleotide-polymorphism-based microevolution, contrasting with a rampant within-patient genetic heterogeneity mainly targeting multiple phase-variable loci and a major antigen-coding gene (tprK). TprK demonstrated remarkable variability and redundancy, intra- and interpatient, suggesting ongoing parallel adaptive diversification during human infection. Some bacterial functions (for example, flagella- and chemotaxis-associated) were systematically targeted by both inter- and intrastrain single nucleotide polymorphisms, as well as by ongoing within-patient phase variation events. Finally, patient-derived genomes possess mutations targeting a penicillin-binding protein coding gene (mrcA) that had never been reported, unveiling it as a candidate target to investigate the impact on the susceptibility to penicillin. Our findings decode the major genetic mechanisms by which T. pallidum promotes immune evasion and survival, and demonstrate the exceptional power of characterizing evolving pathogen subpopulations during human infection.

Relevance: 30.00%

Abstract:

Neuroaesthetics is the study of the brain’s response to artistic stimuli. The neuroscientist V.S. Ramachandran contends that art is primarily “caricature” or “exaggeration.” Exaggerated forms hyperactivate neurons in viewers’ brains, which in turn produce specific, “universal” responses. Ramachandran identifies a precursor for his theory in the concept of rasa (literally “juice”) from classical Hindu aesthetics, which he associates with “exaggeration.” The canonical Sanskrit texts of Bharata Muni’s Natya Shastra and Abhinavagupta’s Abhinavabharati, however, do not support Ramachandran’s conclusions. They present audiences as dynamic co-creators, not passive recipients. I believe we could more accurately model the neurology of Hindu aesthetic experiences if we took indigenous rasa theory more seriously as qualitative data that could inform future research.

Relevance: 30.00%

Abstract:

The poetic theories of inscape and sprung rhythm developed by the British poet Gerard Manley Hopkins (1844-1889) puzzled critics for years. Most of them relied on the published poems in search of clues to the meaning of his theories. This thesis deepens that analysis by showing that the genesis of the theory of inscape lies in Hopkins's undergraduate notes on the pre-Socratic philosopher Parmenides, and that it was influenced by commentaries on the philosopher's work On Nature. An examination of Hopkins's letters to his fellow poets Robert Bridges and Richard Watson Dixon reveals that sprung rhythm derives from inscape, his foundational theory. The technique of sprung rhythm thus consists in applying inscape to the metrical scheme of poetry. This study first establishes a working definition of each of these theories and then applies them to the manuscripts in order to determine to what extent Hopkins adhered to and exploited them while writing two of his canonical poems, God's Grandeur and The Windhover. The study thus falls within the field of genetic criticism, an approach developed in France, particularly at the Institut des textes et manuscrits modernes (ITEM); most analyses in this vein have therefore dealt with French literary works or with prose texts. Deletions, additions, substitutions and constants across different versions testify to Hopkins's priorities in his quest to achieve the desired effect. This thesis therefore endeavours to reveal the meaning of Hopkins's poetic theories by establishing their respective genesis and application in two of his poems from the perspective of genetic criticism. 
It also contributes to enriching genetic criticism by applying it to literary works written in English and in the form of poetry rather than prose. Finally, its ultimate aim is to revive interest in Hopkins as a viable subject of study, and to foster appreciation of his achievements both as a poetic theorist and as a poet.