903 results for Intercalation from solution
Abstract:
The finite element method (FEM) is now developed to solve the two-dimensional Hartree-Fock (HF) equations for atoms and diatomic molecules. The method and its implementation are described, and results are presented for the atoms Be, Ne and Ar as well as for the diatomic molecules LiH, BH, N_2 and CO as examples. Total energies and eigenvalues calculated with the FEM at the HF level are compared with results obtained with the standard numerical methods used for the solution of the one-dimensional HF equations for atoms, with the traditional LCAO quantum chemical methods for diatomic molecules, and with the newly developed finite difference method at the HF level. In general the accuracy increases from the LCAO to the finite difference to the finite element method.
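As a toy illustration of the finite element idea behind this comparison (a didactic sketch, not the paper's 2D Hartree-Fock solver), a linear-element FEM for a one-dimensional model eigenproblem can be set up in a few lines; all names and parameter choices here are illustrative:

```python
# Toy finite element eigenvalue solve with linear elements for the
# 1D model problem -1/2 u'' = E u on [0, 1], u(0) = u(1) = 0,
# whose exact ground-state energy is pi^2 / 2.
import numpy as np

n = 51                 # number of elements
h = 1.0 / n            # element length
m = n - 1              # interior nodes (Dirichlet BCs eliminate the ends)

# Assembled stiffness matrix (for the -1/2 u'' operator) and the
# consistent mass matrix of piecewise-linear "hat" basis functions.
K = (0.5 / h) * (2 * np.eye(m) - np.eye(m, k=1) - np.eye(m, k=-1))
M = (h / 6.0) * (4 * np.eye(m) + np.eye(m, k=1) + np.eye(m, k=-1))

# Generalized eigenproblem K a = E M a, solved via M^{-1} K.
E = np.sort(np.linalg.eigvals(np.linalg.solve(M, K)).real)
E1 = E[0]
print(E1, np.pi**2 / 2)   # FEM ground state vs. exact pi^2 / 2
```

With a consistent mass matrix, linear elements approach the exact eigenvalue from above as the mesh is refined, which mirrors the variational character of the FEM discretization discussed in the abstract.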
Abstract:
The rejection of the European Constitution marks an important crystallization point for debate about the European Union (EU) and the integration process. The European Constitution was envisaged as the founding document of a renewed and enlarged European Union, and was thus generally assumed to find wide public support. Its rejection was not anticipated. The negative referenda in France and the Netherlands therefore led to a controversial debate about the more fundamental meaning and the consequences of the rejection, both for the immediate state of affairs and for the further integration process. The rejection of the Constitution and the controversy about its correct interpretation therefore present an intriguing puzzle for political analysis. Although the treaty rejection was taken up widely in the field of European Studies, the focus of existing analyses has predominantly been on explaining why the current situation occurred. Underlying these approaches is the premise that by establishing the reasons for the rejection it is possible to derive the ‘true’ meaning of the event for the EU integration process. In my paper I rely on an alternative, discourse-theoretical approach which aims to overcome the positivist perspective dominating the existing analyses. I argue that the meaning of the event ‘treaty rejection’ is not fixed or inherent to it but discursively constructed. The critical assessment of this concrete meaning-production is highly relevant, as the specific meaning attributed to the treaty rejection effectively constrains the scope of supposedly ‘reasonable’ options for action, both in the concrete situation and in the further European integration process more generally. I will argue that the overall framing suggests a fundamentally technocratic approach to governance on the part of the Commission.
Political struggle and public deliberation are no longer foreseen, as the concrete solutions to the citizens’ general concerns are designed by supposedly apolitical experts. Through the communicative diffusion and the active implementation of this particular model of governance, the Commission shapes the future integration process more substantially than is obvious from its seemingly limited, immediate problem-solving orientation of overcoming the ‘constitutional crisis’. As the European Commission is a central actor in the discourse production, my analysis focuses on the specific interpretation of the situation put forward by the Commission. In order to work out the Commission’s particular take on the event, I conducted a frame analysis (following Benford and Snow) on a body of key sources produced in the context of coping with the treaty rejection.
Abstract:
This thesis presents a perceptual system for a humanoid robot that integrates abilities such as object localization and recognition with the deeper developmental machinery required to forge those competences out of raw physical experiences. It shows that a robotic platform can build up and maintain a system for object localization, segmentation, and recognition, starting from very little. What the robot starts with is a direct solution to achieving figure/ground separation: it simply 'pokes around' in a region of visual ambiguity and watches what happens. If the arm passes through an area, that area is recognized as free space. If the arm collides with an object, causing it to move, the robot can use that motion to segment the object from the background. Once the robot can acquire reliable segmented views of objects, it learns from them, and from then on recognizes and segments those objects without further contact. Both low-level and high-level visual features can also be learned in this way, and examples are presented for both: orientation detection and affordance recognition, respectively. The motivation for this work is simple. Training on large corpora of annotated real-world data has proven crucial for creating robust solutions to perceptual problems such as speech recognition and face detection. But the powerful tools used during training of such systems are typically stripped away at deployment. Ideally they should remain, particularly for unstable tasks such as object detection, where the set of objects needed in a task tomorrow might be different from the set of objects needed today. The key limiting factor is access to training data, but as this thesis shows, that need not be a problem on a robotic platform that can actively probe its environment, and carry out experiments to resolve ambiguity. 
This work is an instance of a general approach to learning a new perceptual judgment: find special situations in which the perceptual judgment is easy and study these situations to find correlated features that can be observed more generally.
Abstract:
This paper investigates the linear degeneracies of projective structure estimation from point and line features across three views. We show that the rank of the linear system of equations for recovering the trilinear tensor of three views reduces to 23 (instead of 26) when the scene is a Linear Line Complex (a set of lines in space intersecting a common line) and to 21 when the scene is planar. The LLC situation is only linearly degenerate, and we show that one can obtain a unique solution when the admissibility constraints of the tensor are accounted for. The line configuration described by an LLC, rather than being an obscure case, is in fact quite typical. It includes, as a particular example, the case of a camera moving down a hallway in an office environment or down an urban street. Furthermore, an LLC situation may occur as an artifact, such as in direct estimation from spatio-temporal derivatives of image brightness. Therefore, an investigation into degeneracies and their remedy is also important in practice.
Abstract:
We study the relation between support vector machines (SVMs) for regression (SVMR) and SVMs for classification (SVMC). We show that for a given SVMC solution there exists an SVMR solution which is equivalent for a certain choice of the parameters. In particular, our result is that for $\epsilon$ sufficiently close to one, the optimal hyperplane and threshold for the SVMC problem with regularization parameter $C_c$ are equal to $(1-\epsilon)^{-1}$ times the optimal hyperplane and threshold for SVMR with regularization parameter $C_r = (1-\epsilon)C_c$. A direct consequence of this result is that SVMC can be seen as a special case of SVMR.
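The parameter correspondence stated in this abstract can be written out as a tiny helper; the function name and the example values are illustrative, not from the paper:

```python
# Parameter correspondence between SVM classification (SVMC) and SVM
# regression (SVMR) as stated in the abstract: for epsilon close to 1,
# the SVMC hyperplane/threshold with parameter C_c equal
# (1 - epsilon)^(-1) times the SVMR ones with C_r = (1 - epsilon) * C_c.

def svmr_params_from_svmc(C_c, epsilon):
    """Map the SVMC regularization parameter to the equivalent SVMR one,
    and return the scale factor relating the two hyperplanes."""
    if not 0 < epsilon < 1:
        raise ValueError("epsilon must lie in (0, 1)")
    C_r = (1 - epsilon) * C_c
    scale = 1.0 / (1 - epsilon)   # factor multiplying the SVMR hyperplane
    return C_r, scale

C_r, scale = svmr_params_from_svmc(C_c=10.0, epsilon=0.5)
print(C_r, scale)  # C_r = 5.0, hyperplane scale = 2.0
```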
Abstract:
Uniformly distributed ZnO nanorods, 70-100 nm in diameter and 1-2 μm long, have been successfully grown at low temperatures on GaN using an inexpensive aqueous solution method. The formation and growth of the ZnO nanorods are controlled by reactant concentration, temperature and pH. No catalyst is required. XRD studies show that the ZnO nanorods are single crystals and that they grow along the c axis of the crystal. Room temperature photoluminescence measurements show high-intensity ultraviolet peaks at 388 nm, comparable to those found in high quality ZnO films. A mechanism for the nanorod growth in the aqueous solution is proposed. The dependence of the ZnO nanorods on the growth parameters was also investigated: on changing the growth temperature from 60°C to 150°C, the morphology of the nanorods changed from sharp tips (needle shape) to flat tips (rod shape). These kinds of structures are useful in laser and field emission applications.
Abstract:
Uniformly distributed ZnO nanorods, 80-120 nm in diameter and 1-2 μm long, have been successfully grown at low temperatures on GaN using an inexpensive aqueous solution method. The formation and growth of the ZnO nanorods are controlled by reactant concentration, temperature and pH. No catalyst is required. XRD studies show that the ZnO nanorods are single crystals and that they grow along the c axis of the crystal. Room temperature photoluminescence measurements show high-intensity ultraviolet peaks at 388 nm, comparable to those found in high quality ZnO films. A mechanism for the nanorod growth in the aqueous solution is proposed. The dependence of the ZnO nanorods on the growth parameters was also investigated: on changing the growth temperature from 60°C to 150°C, the morphology of the nanorods changed from sharp tips with a high aspect ratio to flat tips with a smaller aspect ratio. These kinds of structures are useful in laser and field emission applications.
Abstract:
Many online services access a large number of autonomous data sources and at the same time need to meet different user requirements. It is essential for these services to achieve semantic interoperability among the entities exchanging information. In the presence of an increasing number of proprietary business processes, heterogeneous data standards, and diverse user requirements, it is critical that the services be implemented using adaptable, extensible, and scalable technology. The COntext INterchange (COIN) approach, inspired by the similar goals of the Semantic Web, provides a robust solution. In this paper, we describe how COIN can be used to implement dynamic online services where semantic differences are reconciled on the fly. We show that COIN is flexible and scalable by comparing it with several conventional approaches. For a given ontology, the number of conversions in COIN is quadratic in the number of distinctions of the semantic aspect with the most distinctions. These semantic aspects are modeled as modifiers in a conceptual ontology; in most cases the number of conversions is linear in the number of modifiers, which is significantly smaller than in the traditional hard-wired middleware approach, where the number of conversion programs is quadratic in the number of sources and data receivers. In the example scenario in the paper, the COIN approach needs only 5 conversions to be defined, while traditional approaches require 20,000 to 100 million. COIN achieves this scalability by automatically composing all the comprehensive conversions from a small number of declaratively defined sub-conversions.
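The scaling argument can be made concrete with a back-of-the-envelope count; the functions, the counting formula for sub-conversions, and the scenario numbers below are illustrative assumptions, not figures from the paper:

```python
# Illustrative conversion-count comparison. Hard-wired middleware needs
# one conversion program per (source, receiver) pair, so the count grows
# quadratically; a COIN-style approach only declares sub-conversions per
# semantic modifier and composes full conversions automatically.

def hardwired_conversions(n_sources, n_receivers):
    """One hand-written program per source/receiver pair."""
    return n_sources * n_receivers

def coin_subconversions(distinctions_per_modifier):
    """Declared sub-conversions: for each modifier, one per ordered pair
    of its distinct value contexts (a simple illustrative count)."""
    return sum(d * (d - 1) for d in distinctions_per_modifier)

print(hardwired_conversions(200, 100))   # 20000 pairwise programs
print(coin_subconversions([2, 2, 3]))    # 2 + 2 + 6 = 10 declarations
```

The point of the sketch is only the shape of the growth: the pairwise count explodes with the number of participants, while the declarative count depends on the (much smaller) number of modifiers and their distinctions.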
Abstract:
One of the tantalising remaining problems in compositional data analysis lies in how to deal with data sets in which there are components which are essential zeros. By an essential zero we mean a component which is truly zero, not something recorded as zero simply because the experimental design or the measuring instrument has not been sufficiently sensitive to detect a trace of the part. Such essential zeros occur in many compositional situations, such as household budget patterns, time budgets, palaeontological zonation studies, and ecological abundance studies. Devices such as nonzero replacement and amalgamation are almost invariably ad hoc and unsuccessful in such situations. From consideration of such examples it seems sensible to build up a model in two stages, the first determining where the zeros will occur and the second how the available unit is distributed among the non-zero parts. In this paper we suggest two such models: an independent binomial conditional logistic normal model and a hierarchical dependent binomial conditional logistic normal model. The compositional data in such modelling consist of an incidence matrix and a conditional compositional matrix. Interesting statistical problems arise, such as the question of estimability of parameters, the nature of the computational process for the estimation of both the incidence and compositional parameters caused by the complexity of the subcompositional structure, the formation of meaningful hypotheses, and the devising of suitable testing methodology within a lattice of such essential-zero compositional hypotheses. The methodology is illustrated by application to both simulated and real compositional data.
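The two-stage idea can be sketched as a small simulation: a binomial incidence stage decides which parts are non-zero, then a logistic-normal stage distributes the unit among them. The function name, parameter values, and the softmax-of-a-Gaussian stand-in for the logistic-normal transform are all illustrative assumptions:

```python
# Minimal simulation sketch of a two-stage essential-zero composition:
# stage 1 draws an incidence vector (which parts are present at all),
# stage 2 draws logistic-normal shares for the present parts only.
import numpy as np

rng = np.random.default_rng(0)

def sample_composition(p_present, mu, sigma):
    """Stage 1: independent Bernoulli incidence; stage 2: shares for the
    non-zero parts via a softmax of a Gaussian vector (logistic-normal)."""
    D = len(p_present)
    present = rng.random(D) < np.asarray(p_present)   # incidence vector
    comp = np.zeros(D)                                # essential zeros stay 0
    k = int(present.sum())
    if k == 0:
        return present, comp
    z = rng.normal(mu, sigma, size=k)
    w = np.exp(z - z.max())                           # numerically stable
    comp[present] = w / w.sum()                       # shares sum to one
    return present, comp

present, comp = sample_composition([0.9, 0.5, 0.7, 0.3], mu=0.0, sigma=1.0)
```

The output pair mirrors the data structure named in the abstract: the incidence vector is one row of the incidence matrix, and the shares form one row of the conditional compositional matrix.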
Abstract:
In this paper we present a novel structure from motion (SfM) approach able to infer 3D deformable models from uncalibrated stereo images. Using a stereo setup dramatically improves the 3D model estimation when the observed 3D shape is mostly deforming without undergoing strong rigid motion. Our approach first calibrates the stereo system automatically and then computes a single metric rigid structure for each frame. Afterwards, these 3D shapes are aligned to a reference view using a RANSAC method in order to compute the mean shape of the object and to select the subset of points on the object which have remained rigid throughout the sequence without deforming. The selected rigid points are then used to compute frame-wise shape registration and to extract the motion parameters robustly from frame to frame. Finally, all this information is used in a global optimization stage with bundle adjustment, which allows the frame-wise initial solution to be refined and the non-rigid 3D model to be recovered. We show results on synthetic and real data that demonstrate the performance of the proposed method even when there is no rigid motion in the original sequence.
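The RANSAC-based rigid alignment step can be sketched compactly: sample minimal point sets, fit a rigid transform by a Procrustes/Kabsch SVD fit, and keep the model with the most inliers. This is an illustrative stand-in, not the authors' implementation; all names and thresholds are assumptions:

```python
# Sketch of rigid-point selection between two 3D point sets P and Q:
# RANSAC over correspondences with a Kabsch (SVD) rigid fit as the model.
import numpy as np

def kabsch(P, Q):
    """Best rotation R and translation t with R @ P_i + t ≈ Q_i."""
    cP, cQ = P.mean(0), Q.mean(0)
    H = (P - cP).T @ (Q - cQ)
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))   # guard against reflections
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    return R, cQ - R @ cP

def ransac_rigid(P, Q, iters=200, tol=0.05, rng=np.random.default_rng(1)):
    """Select the subset of points that moved rigidly between P and Q."""
    best_inliers = np.zeros(len(P), dtype=bool)
    for _ in range(iters):
        idx = rng.choice(len(P), 3, replace=False)   # minimal sample
        R, t = kabsch(P[idx], Q[idx])
        err = np.linalg.norm((P @ R.T + t) - Q, axis=1)
        inliers = err < tol
        if inliers.sum() > best_inliers.sum():
            best_inliers = inliers
    R, t = kabsch(P[best_inliers], Q[best_inliers])  # refit on all inliers
    return R, t, best_inliers
```

The inlier mask returned here plays the role of the "points which have remained rigid" in the abstract: deforming points accumulate large residuals under the best rigid model and are excluded from the motion estimate.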
Abstract:
The registration of full 3-D models is an important task in computer vision. Range finders reconstruct only a partial view of the object. Many authors have proposed techniques to register 3D surfaces from multiple views, in which there are basically two aspects to consider: first, coarse registration, in which some sort of correspondences are established; second, fine registration, in order to obtain a more accurate solution. A survey of the most common techniques is presented, including experimental results for some of them.
Abstract:
This paper presents a complete solution for creating accurate 3D textured models from monocular video sequences. The methods are developed within the framework of sequential structure from motion, where a 3D model of the environment is maintained and updated as new visual information becomes available. The camera position is recovered by directly associating the 3D scene model with local image observations. Compared to standard structure from motion techniques, this approach decreases the error accumulation while increasing the robustness to scene occlusions and feature association failures. The obtained 3D information is used to generate high-quality, composite visual maps of the scene (mosaics). The visual maps are used to create texture-mapped, realistic views of the scene.
Abstract:
Title: Data-Driven Text Generation using Neural Networks
Speaker: Pavlos Vougiouklis, University of Southampton
Abstract: Recent work on neural networks shows their great potential for tackling a wide variety of Natural Language Processing (NLP) tasks. This talk will focus on the Natural Language Generation (NLG) problem and, more specifically, on the extent to which neural network language models can be employed for context-sensitive and data-driven text generation. In addition, a neural network architecture for response generation in social media, along with the training methods that enable it to capture contextual information and participate effectively in public conversations, will be discussed.
Speaker Bio: Pavlos Vougiouklis obtained his 5-year Diploma in Electrical and Computer Engineering from the Aristotle University of Thessaloniki in 2013. He was awarded an MSc degree in Software Engineering by the University of Southampton in 2014. In 2015, he joined the Web and Internet Science (WAIS) research group of the University of Southampton, and he is currently working towards his PhD in the field of Neural Network Approaches for Natural Language Processing.

Title: Provenance is Complicated and Boring — Is there a solution?
Speaker: Darren Richardson, University of Southampton
Abstract: Paper trails, auditing, and accountability — arguably not the sexiest terms in computer science. But then you discover that you've possibly been eating horse-meat, and the importance of provenance becomes almost palpable. Having accepted that we should be creating provenance-enabled systems, the challenge of communicating that provenance to casual users is not trivial: users should not need a detailed working knowledge of your system, and they certainly shouldn't be expected to understand the data model. So how, then, do you give users an insight into the provenance without building a bespoke system for each and every provenance installation?
Speaker Bio: Darren is a final-year Computer Science PhD student. He completed his undergraduate degree in Electronic Engineering at Southampton in 2012.
Abstract:
In the present work, the toxic activity of extracts of Eupatorium microphyllum L.F. was evaluated on 4th-instar larvae of the mosquito Aedes aegypti (Linnaeus) under laboratory conditions. Aqueous extracts were used at concentrations of 500, 1,500 and 2,500 mg L-1, and acetone extracts at 10, 20, 30, 40 and 50 mg L-1. The bioassays were carried out in triplicate, each with 20 larvae exposed for 24 hours to 150 mL of solution. Control groups were employed in all bioassays. In the evaluation of the acetone extracts, a negative control was included to rule out mortality caused by the solvent. The aqueous extracts showed low to moderate larvicidal action, with mortality below 20%. In contrast, the acetone extracts produced 15% mortality at 10 and 20 mg L-1 and 22 to 38% mortality at 30 and 40 mg L-1, while at 50 mg L-1 mortality reached 95.4%, a highly significant result. The acetone extract concentrations thus proved the most efficient for the control of the selected mosquitoes. Both types of extract showed toxic effects on A. aegypti larvae; nevertheless, the effect of the acetone extracts was greater than that of the aqueous extracts of E. microphyllum, which constitutes a viable alternative in the search for new larvicides from natural compounds.
Abstract:
In this thesis, the internalization route of onconase, a cytotoxic RNase, has been characterized. The results indicate that onconase enters cells via a clathrin- and AP-2 complex-dependent pathway. It is then directed to the recycling endosomes, and it is through this route that the protein exerts its cytotoxicity. Furthermore, the results of this work show that PE5, a cytotoxic variant of human pancreatic ribonuclease (HP-RNase), interacts with importin through several residues that, although not sequential, lie close together in the three-dimensional structure of the protein. PM8 is an HP-RNase whose crystallographic structure is a dimer formed by N-terminal domain swapping. In this thesis, conditions have been established to stabilize this dimer in solution, and a mechanism for the dimerization is also proposed.