928 results for proof-of-concept
Abstract:
Computer games are significant since they embody our youngsters’ engagement with contemporary culture, including both play and education. These games rely heavily on visuals: systems of sign and expression based on concepts and principles of Art and Architecture. We are researching a new genre of computer games, ‘Educational Immersive Environments’ (EIEs), to provide educational materials suitable for the school classroom. Close collaboration with subject teachers is necessary, but we feel a specific need to engage with the practicing artist, the art theoretician and historian. Our EIEs are loaded with multimedia (but especially visual) signs which act to direct the learner and provide the ‘game-play’ experience, forming semiotic systems. We suggest the hypothesis that computer games are a space of deconstruction and reconstruction (DeRe): when players enter the game, their physical world and their culture are torn apart; they move in a semiotic system which serves to reconstruct an alternate reality where disbelief is suspended. The semiotic system draws heavily on visuals which direct the players’ interactions and produce motivating gameplay. These can establish a reconstructed culture and an emerging game narrative. We have recently tested our hypothesis and have used it in developing design principles for computer game designers. Yet there are outstanding issues concerning the nature of the visuals used in computer games, and so questions for contemporary artists. Currently, the computer game industry employs artists in a ‘classical’ role in the production of concept sketches, storyboards and 3D content. But this is based on a specification from the client which restricts the artist’s intellectual freedom. Our DeRe hypothesis places the artist at the generative centre, to inform the game designer how art may shape our DeRe semiotic spaces. This must of course begin with the artist’s understanding of DeRe at a time when our ‘identities are becoming increasingly fractured, networked, virtualized and distributed’. We hope to persuade artists to engage with the medium of computer game technology to explore these issues. In particular, we pose several questions to the artist: (i) How can particular ‘periods’ in art history be used to inform the design of computer games? (ii) How can specific artistic elements or devices be used to design ‘signs’ that guide the player through the game? (iii) How can visual material be integrated with other semiotic strata such as text and audio?
Abstract:
Given the growing internationalization of business, a good command of English is, most of the time, important for the development of technical (specific) competences. It is thus critical that professionals use accurate terminology to lay the grounds for successful communication. Furthermore, business communication is increasingly moving to ICT-mediated settings, and professionals have to be able to adjust promptly to these needs, resorting to trustworthy online information sources but also using the technologies that best serve their business purposes. In this scenario, the main objective of this study is to find evidence as to the utility of concept mapping as a teaching and learning strategy for the appropriation of business English terminology, enabling students to use English more efficiently as a language of communication in a business context. This study was based on a case study methodology, mainly of an exploratory nature. Participants were students (n = 30) enrolled in the subject English Applied to Management II at Águeda School of Technology and Management – University of Aveiro (2013/14 edition). They were asked to create and peer review two concept maps (cmaps), one individually and another in pairs. The data gathered were treated and analysed using qualitative (content analysis) and quantitative (descriptive statistical analysis) techniques. Results of the data analysis reveal that the use of a collaborative concept mapping tool promotes the development not only of linguistic competences in the use of business terminology, but also of communication and collaboration competences. It was also a very important motivating element in the students’ engagement with the subject content.
Abstract:
The kinematic structure of planar mechanisms addresses the study of attributes determined exclusively by the joining pattern among the links forming a mechanism. The system group classification is central to the kinematic structure and consists of determining a sequence of kinematically and statically independent simple chains which represent a modular basis for the kinematic and force analysis of the mechanism. This article presents a novel graph-based algorithm for the structural analysis of planar mechanisms with closed-loop kinematic structure, which determines a sequence of modules (Assur groups) representing the topology of the mechanism. A computational complexity analysis and a proof of correctness of the implemented algorithm are provided. A case study is presented to illustrate the results of the devised method.
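The abstract does not spell out the algorithm itself; as an illustrative sketch only, the classical decomposition idea can be phrased as repeatedly peeling off dyads (the simplest Assur groups: two binary links joined to each other, with their remaining joints attached to already-solved links) from a link-joint graph. The function and the networkx representation below are assumptions, not the paper's implementation, and handle only dyads of binary links.

```python
# Sketch: greedy dyad decomposition of a planar mechanism.
# Nodes are links, edges are joints; 'solved' starts as frame + input links.
import networkx as nx

def peel_dyads(g: nx.Graph, solved: set) -> list:
    """Return a sequence of dyads, each attaching only to links whose
    motion is already determined. Raises if a higher-order Assur group
    would be needed (this sketch handles binary-link dyads only)."""
    solved = set(solved)
    unsolved = set(g.nodes) - solved
    modules = []
    while unsolved:
        for u, v in g.edges():
            if u in solved or v in solved:
                continue
            # Dyad test: both links binary, outer joints on solved links.
            outer = (set(g[u]) | set(g[v])) - {u, v}
            if g.degree(u) == 2 and g.degree(v) == 2 and outer <= solved:
                modules.append((u, v))
                solved |= {u, v}
                unsolved -= {u, v}
                break
        else:
            raise ValueError("no dyad found; higher-order Assur group needed")
    return modules

# Four-bar linkage example: the coupler-rocker pair forms the single dyad.
four_bar = nx.Graph([("ground", "crank"), ("crank", "coupler"),
                     ("coupler", "rocker"), ("rocker", "ground")])
print(peel_dyads(four_bar, {"ground", "crank"}))  # [('coupler', 'rocker')]
```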
Abstract:
Every day, we shift among various states of sleep and arousal to meet the many demands of our bodies and environment. A central puzzle in neurobiology is how the brain controls these behavioral states, which are essential to an animal's well-being and survival. Mammalian models have dominated sleep and arousal research, although in the past decade invertebrate models have made significant contributions to our understanding of the genetic underpinnings of behavioral states. More recently, the zebrafish (Danio rerio), a diurnal vertebrate, has emerged as a promising model system for sleep and arousal research.
In this thesis, I describe two studies on sleep/arousal pathways that I conducted using zebrafish, and I discuss how the findings can be combined in future projects to advance our understanding of vertebrate sleep/arousal pathways. In the first study, as part of a large-scale effort to identify molecules that regulate behavioral states, I discovered a neuropeptide that regulates zebrafish sleep and arousal. Taking advantage of facile zebrafish genetics, I constructed mutants for the three known receptors of this peptide and identified the one receptor that exclusively mediates the observed behavioral effects. I further show that the peptide exerts its behavioral effects independently of signaling at a key module of a neuroendocrine signaling pathway. This finding contradicts the hypothesis put forth in mammalian systems that the peptide acts through the classical neuroendocrine pathway; our data further generate new testable hypotheses for determining the central nervous system pathways or alternative neuroendocrine pathways involved.
In the second study, I present the development of a chemigenetic method to non-invasively manipulate neurons in the behaving zebrafish. I validated this technique by expressing and inducing the chemigenetic tool in a restricted population of sleep-regulating neurons in the zebrafish. As predicted by established models of this vertebrate sleep regulator, chemigenetic activation of these neurons induced hyperactivity, whereas chemigenetic ablation of these neurons increased sleep behavior. Given that light is a potent modulator of behavior in zebrafish, our proof-of-principle data provide a springboard for future studies of sleep/arousal and other light-dependent behaviors to interrogate genetically defined populations of neurons independently of optogenetic tools.
Abstract:
The overarching theme of this thesis is mesoscale optical and optoelectronic design of photovoltaic and photoelectrochemical devices. In a photovoltaic device, light absorption and charge carrier transport are coupled together on the mesoscale, and in a photoelectrochemical device, light absorption, charge carrier transport, catalysis, and solution species transport are all coupled together on the mesoscale. The work discussed herein demonstrates that simulation-based mesoscale optical and optoelectronic modeling can lead to detailed understanding of the operation and performance of these complex mesostructured devices, serve as a powerful tool for device optimization, and efficiently guide device design and experimental fabrication efforts. In-depth studies of two mesoscale wire-based device designs illustrate these principles—(i) an optoelectronic study of a tandem Si|WO3 microwire photoelectrochemical device, and (ii) an optical study of III-V nanowire arrays.
The study of the monolithic, tandem, Si|WO3 microwire photoelectrochemical device begins with the development of an optoelectronic model and its validation against experiment. This study capitalizes on the synergy between experiment and simulation to demonstrate the model’s predictive power for extractable device voltage and light-limited current density. The developed model is then used to understand the limiting factors of the device and optimize its optoelectronic performance. The results of this work reveal that high-fidelity modeling can facilitate unequivocal identification of limiting phenomena, such as parasitic absorption via excitation of a surface plasmon-polariton mode, and quick design optimization, achieving over a 300% enhancement in optoelectronic performance over a nominal design for this device architecture; such an optimization would be time-consuming and challenging to carry out via experiment.
The work on III-V nanowire arrays also began as a collaboration between experiment and simulation, aimed at understanding unprecedented, experimentally observed absorption enhancements in sparse arrays of vertically oriented GaAs nanowires. To explain this resonant absorption in periodic arrays of high-index semiconductor nanowires, a unified framework combining a leaky-waveguide perspective with that of photonic crystals supporting Bloch modes is developed in the context of silicon, using both analytic theory and electromagnetic simulations. This detailed theoretical understanding is then applied to a simulation-based optimization of light absorption in sparse arrays of GaAs nanowires. Near-unity absorption in sparse, 5% fill fraction arrays is demonstrated via tapering of nanowires and the use of multiple wire radii in a single array. Finally, experimental efforts towards fabrication of the optimized array geometries are presented. A hybrid self-catalyzed and selective-area MOCVD growth method is used to establish morphology control of GaP nanowire arrays. Similarly, morphology and pattern control of nanowires is demonstrated with ICP-RIE of InP. Optical characterization of the InP nanowire arrays gives proof of principle that tapering and multiple wire radii can lead to near-unity absorption in sparse arrays of InP nanowires.
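For reference, the quoted fill fraction follows from simple array geometry. A minimal worked relation, assuming a square lattice of cylindrical wires of radius r and pitch p (the lattice type is an assumption, not stated in the abstract):

\[
f = \frac{\pi r^2}{p^2}, \qquad f = 0.05 \;\Rightarrow\; \frac{r}{p} = \sqrt{\frac{0.05}{\pi}} \approx 0.126,
\]

i.e. a 5% fill fraction corresponds to wires whose radius is roughly one eighth of the array pitch.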
Abstract:
The objective of the work described in this dissertation is the development of new wireless passive force monitoring platforms for applications in the medical field, specifically monitoring lower limb prosthetics. The developed sensors consist of stress-sensitive, magnetically soft amorphous metallic glass materials. The first technology is based on magnetoelastic resonance. Specifically, when exposed to an AC excitation field along with a constant DC bias field, the magnetoelastic material mechanically vibrates, and may reach resonance if the field frequency matches the mechanical resonant frequency of the material. The presented work illustrates that an applied loading pins portions of the strip, effectively decreasing the strip length, which results in an increase in the resonant frequency. The developed technology is deployed in a prototype lower limb prosthetic sleeve for monitoring forces experienced by the distal end of the residuum. This work also reports on the development of a magnetoharmonic force sensor composed of the same material. According to the Villari effect, an applied loading to the material results in a change in the permeability of the magnetic sensor, which is visualized as an increase in the higher-order harmonic fields of the material. Specifically, by applying a constant low-frequency AC field and sweeping the applied DC biasing field, the higher-order harmonic components of the magnetic response can be visualized. This sensor technology was also instrumented onto a lower limb prosthetic as proof of deployment; however, the magnetoharmonic sensor revealed complications with sensor positioning and the need to tailor the interface mechanics between the sensing material and the surface being monitored. The novelty of these two technologies lies in their wireless, passive nature, which allows for long-term monitoring over the lifetime of a given device. Additionally, the developed technologies are low cost. Recommendations for future work include improving the system for real-time monitoring, which would be useful for data collection outside of a clinical setting.
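The inverse dependence of resonant frequency on strip length that this sensing scheme exploits follows from the standard relation for the fundamental longitudinal resonance of a free-standing ribbon (a textbook first-order model, not a formula quoted from the dissertation; some treatments add a Poisson-ratio correction):

\[
f_0 = \frac{1}{2L}\sqrt{\frac{E}{\rho}},
\]

where L is the effective strip length, E the Young's modulus and \rho the density of the ribbon. Loading that pins the strip reduces the effective L and therefore raises f_0, as described above.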
Abstract:
There is a growing recognition of the importance of the commensal intestinal microbiota in the development and later function of the central nervous system. Research using germ-free mice (mice raised without any exposure to microorganisms) has provided some of the most persuasive evidence for a role of these bacteria in gut-brain signalling. Key findings show that the microbiota is necessary for normal stress responsivity, anxiety-like behaviors, sociability, and cognition. Furthermore, the microbiota maintains central nervous system homeostasis by regulating immune function and blood-brain barrier integrity. Studies have also found that the gut microbiota influences neurotransmitter, synaptic, and neurotrophic signalling systems and neurogenesis. The principal advantage of the germ-free mouse model lies in proof-of-principle studies and in the fact that a complete microbiota or defined consortia of bacteria can be introduced at various developmental time points. However, a germ-free upbringing can induce permanent neurodevelopmental deficits that may render the model unsuitable for specific scientific queries that do not involve early-life microbial deficiency. As such, alternative and complementary strategies to the germ-free model are warranted; these include antibiotic treatment to create microbiota-deficient animals at distinct time points across the lifespan. Increasing our understanding of the impact of the gut microbiota on brain and behavior has the potential to inform novel management strategies for stress-related gastrointestinal and neuropsychiatric disorders.
Abstract:
Things change. Words change, meaning changes, and use changes both words and meaning. In information access systems this means concept schemes such as thesauri or classification schemes change. They always have. Concept schemes that have survived have evolved over time, moving from one version, often called an edition, to the next. If we want to manage how words and meanings (and, as a consequence, use) change in an effective manner, and if we want to be able to search across versions of concept schemes, we have to track these changes. This paper explores how we might expand SKOS, a World Wide Web Consortium (W3C) draft recommendation, in order to do that kind of tracking. The Simple Knowledge Organization System (SKOS) Core Guide is sponsored by the Semantic Web Best Practices and Deployment Working Group. The second draft, edited by Alistair Miles and Dan Brickley, was issued in November 2005. SKOS is a “model for expressing the basic structure and content of concept schemes such as thesauri, classification schemes, subject heading lists, taxonomies, folksonomies, other types of controlled vocabulary and also concept schemes embedded in glossaries and terminologies” in RDF. How SKOS handles versioning in concept schemes is an open issue. The current draft guide suggests using OWL and DCTERMS as mechanisms for concept scheme revision. As it stands, an editor of a concept scheme can make notes or declare in OWL that more than one version exists. This paper adds to the SKOS Core by introducing a tracking system for changes in concept schemes. We call this tracking system vocabulary ontogeny. Ontogeny is a biological term for the development of an organism during its lifetime. Here we use the ontogeny metaphor to describe how vocabularies change over their lifetime. Our purpose here is to create a conceptual mechanism that will track these changes and, in so doing, enhance information retrieval and prevent document loss through versioning, thereby enabling persistent retrieval.
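The paper's "vocabulary ontogeny" vocabulary is not reproduced in this abstract, so as an illustrative sketch only, here is how the revision hooks the draft guide itself suggests (plain SKOS plus DCTERMS) can link two editions of a scheme in RDF, using the rdflib library; all URIs are hypothetical.

```python
# Sketch: two editions of a SKOS concept scheme linked with the
# DCTERMS revision properties the SKOS draft guide suggests.
from rdflib import Graph, Literal, Namespace
from rdflib.namespace import DCTERMS, RDF, SKOS

EX = Namespace("http://example.org/scheme/")  # hypothetical namespace

g = Graph()
g.bind("skos", SKOS)
g.bind("dcterms", DCTERMS)

# Edition 2 is a version of the abstract scheme and replaces edition 1.
g.add((EX.ed2, RDF.type, SKOS.ConceptScheme))
g.add((EX.ed2, DCTERMS.isVersionOf, EX.scheme))
g.add((EX.ed2, DCTERMS.replaces, EX.ed1))
g.add((EX.ed2, DCTERMS.issued, Literal("2005-11")))

# A concept whose preferred label belongs to this edition.
g.add((EX.c42, RDF.type, SKOS.Concept))
g.add((EX.c42, SKOS.inScheme, EX.ed2))
g.add((EX.c42, SKOS.prefLabel, Literal("information retrieval", lang="en")))

print(g.serialize(format="turtle"))
```

As the abstract notes, such edition-level declarations record that versions exist but do not themselves track how individual concepts changed; that gap is what the proposed ontogeny mechanism addresses.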
Abstract:
Following the seminal work of Zhuang, connected Hopf algebras of finite GK-dimension over algebraically closed fields of characteristic zero have been the subject of several recent papers. This thesis is concerned with continuing this line of research and promoting connected Hopf algebras as a natural, intricate and interesting class of algebras. We begin by discussing the theory of connected Hopf algebras which are either commutative or cocommutative, and then proceed to review the modern theory of arbitrary connected Hopf algebras of finite GK-dimension initiated by Zhuang. We next focus on the (left) coideal subalgebras of connected Hopf algebras of finite GK-dimension. They are shown to be deformations of commutative polynomial algebras. A number of homological properties follow immediately from this fact. Further properties are described, examples are considered and invariants are constructed. A connected Hopf algebra is said to be "primitively thick" if the difference between its GK-dimension and the vector-space dimension of its primitive space is precisely one. Building on the results of Wang, Zhang and Zhuang, we describe a method of constructing such a Hopf algebra, and as a result obtain a host of new examples of such objects. Moreover, we prove that such a Hopf algebra can never be isomorphic to the enveloping algebra of a semisimple Lie algebra, nor can a semisimple Lie algebra appear as its primitive space. It has been asked in the literature whether connected Hopf algebras of finite GK-dimension are always isomorphic as algebras to enveloping algebras of Lie algebras. We provide a negative answer to this question by constructing a counterexample of GK-dimension 5. Substantial progress was made in determining the order of the antipode of a finite-dimensional pointed Hopf algebra by Taft and Wilson in the 1970s. Our final main result is to show that the proof of their result can be generalised to give an analogous result for arbitrary pointed Hopf algebras.
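In symbols, writing P(H) for the primitive space of a connected Hopf algebra H over a field k, the "primitively thick" condition defined above reads:

\[
\operatorname{GKdim} H - \dim_k P(H) = 1.
\]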
Abstract:
This thesis contains an overview of potential alternative options to couple formate produced from CO2 with coupling partners other than formate itself. Ultimately, the intent is to produce high-value chemicals from CO2 at high selectivity and conversion, whilst keeping the number of electrons required in the electrochemical CO2 conversion to a minimum. To select and find new coupling partners, a framework was developed with which a broad variety of candidates were assessed and ranked. A multi-stage process was used to first select potential classes of molecules. For each class, a variety of commercially available compounds was analysed in depth for its potential suitability in the reaction with the active carbonite intermediate. This analysis has shown that a wide variety of factors come into play, and especially the reactivity of the hydride catalyst poses a major challenge. The three major potential compounds suitable for the coupling are the two carbon oxides (CO2 and CO) and the aldehydes. As a second step, the remaining options were ranked to identify which compound to test first. In this ranking, the reactants' sustainability, ease of commercial operation and commercial attractiveness were considered. The compounds showing the highest potential are CO2, benzaldehyde and para-formaldehyde. In proof-of-principle experiments, CO2 could successfully be incorporated in the form of carbonate, oxalate and potentially formate. The overall incorporation efficiency based on hydride consumption was shown to be 50%. It is suggested that this work be continued with mechanistic studies to understand the reaction in detail, since the knowledge gained would allow the reaction to be optimized towards optimal CO2 incorporation in the form of oxalate.
Abstract:
Since 2010, the proton radius has become one of the most interesting values to determine. The first evidence that our understanding of its internal structure is incomplete came from the measurement of the Lamb shift in muonic hydrogen, which led to a value 7σ lower than expected. A new road was thus opened, and the epoch of the Proton Radius Puzzle began. The FAMU experiment is a project that tries to answer this puzzle by implementing a high-precision experimental apparatus. The work of this thesis is based on the study, construction and first characterization of a new detection system. Informed by previous experiments and simulations, this apparatus is composed of 17 detectors positioned on a semicircular crown, together with the related electronic circuit. The characterization of the detectors is based on a LabVIEW program controlling a digital potentiometer and on two other analog potentiometers, all three used to set the amplitude of each detector to a predefined value, around 1.2 V, as read on the oscilloscope used to observe the signal. This is required in order to obtain, in the final measurement, a single high peak given by the sum of all the signals coming from the detectors. Each signal was acquired for almost half an hour, but the entire circuit was kept active for longer to verify its capacity to work over extended periods. The principal results of this thesis are the spectra of 12 detectors and the corresponding values of voltage, FWHM and resolution. The acquisitions also show another expected behavior: the strong dependence of the detectors on temperature, demonstrating that a change in temperature causes fluctuations in the signal. In turn, these fluctuations affect the spectrum, resulting in a shift of the curve and a lower resolution. On the other hand, a measurement performed in stable conditions leads to agreement between the nominal and experimental measurements, as observed for detectors 10, 11 and 12 of our system.
Abstract:
Nowadays, some activities, such as taking out an insurance policy or opening a bank account, can be carried out by navigating through a web page or a downloadable application. Since the user is often “hidden” behind a monitor or a smartphone, a solution is needed that can guarantee their identity. Companies often require the submission of a “proof-of-identity”, which usually consists of a picture of an identity document of the user, together with a picture or a brief video of the user themselves. This work describes a system whose purpose is the automation of these kinds of verification.
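The abstract does not disclose the implementation, so as a sketch of the core document-to-selfie check only, here is one way such a comparison can be expressed, assuming the open-source face_recognition library; the function name, file names and tolerance are illustrative, not taken from the thesis.

```python
# Sketch: does the face on the ID document match the face in the selfie?
import face_recognition

def faces_match(document_path: str, selfie_path: str,
                tolerance: float = 0.6) -> bool:
    doc_image = face_recognition.load_image_file(document_path)
    selfie_image = face_recognition.load_image_file(selfie_path)
    doc_encodings = face_recognition.face_encodings(doc_image)
    selfie_encodings = face_recognition.face_encodings(selfie_image)
    if not doc_encodings or not selfie_encodings:
        return False  # no face detected in one of the images
    # Compare the first detected face in each image; lower tolerance
    # means a stricter match.
    return bool(face_recognition.compare_faces(
        [doc_encodings[0]], selfie_encodings[0], tolerance=tolerance)[0])

if __name__ == "__main__":
    print(faces_match("id_document.jpg", "selfie.jpg"))
```

A production system would of course add liveness checks on the video and document-authenticity checks, which this sketch omits.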
Abstract:
Concept formation depends on language and thought, which promote the integration of information coming from the senses. It is postulated that changes in the person, and in the objects and events to be known, suggest flexible models of concept teaching. It is assumed that the same considerations apply to teaching concepts to blind pupils. Specificities of this process are discussed, including the role of touch as a resource, although not as a direct substitute for vision, and the notion of representation as a basis for the elaboration of pedagogical resources for the blind student.
Abstract:
An (n, d)-expander is a graph G = (V, E) such that for every X ⊆ V with |X| ≤ 2n − 2 we have |Γ_G(X)| ≥ (d + 1)|X|. A tree T is small if it has at most n vertices and maximum degree at most d. Friedman and Pippenger (1987) proved that any (n, d)-expander contains every small tree. However, their elegant proof does not seem to yield an efficient algorithm for obtaining the tree. In this paper, we give an alternative result that does admit a polynomial-time algorithm for finding the immersion of any small tree in subgraphs G of (N, D, λ)-graphs Λ, as long as G contains a positive fraction of the edges of Λ and λ/D is small enough. In several applications of the Friedman-Pippenger theorem, including the ones in the original paper of those authors, the (n, d)-expander G is a subgraph of an (N, D, λ)-graph as above. Therefore, our result suffices to provide efficient algorithms for such previously non-constructive applications. As an example, we discuss a recent result of Alon, Krivelevich, and Sudakov (2007) concerning the embedding of nearly spanning bounded-degree trees, the proof of which makes use of the Friedman-Pippenger theorem. We shall also show a construction inspired by Wigderson-Zuckerman expander graphs for which any sufficiently dense subgraph contains all trees of sizes and maximum degrees achieving essentially optimal parameters. Our algorithmic approach is based on a reduction of the tree embedding problem to a certain on-line matching problem for bipartite graphs, solved by Aggarwal et al. (1996).
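Stated compactly in LaTeX notation (a direct restatement of the Friedman-Pippenger theorem quoted above, with nothing added):

\[
|\Gamma_G(X)| \ge (d+1)\,|X| \;\text{ for all } X \subseteq V \text{ with } |X| \le 2n-2
\;\Longrightarrow\;
T \subseteq G \;\text{ for every tree } T \text{ with } |V(T)| \le n \text{ and } \Delta(T) \le d.
\]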
Abstract:
We study the competition interface between two growing clusters in a growth model associated to last-passage percolation. When the initial unoccupied set is approximately a cone, we show that this interface has an asymptotic direction with probability 1. The behavior of this direction depends on the angle θ of the cone: for θ ≥ 180°, the direction is deterministic, while for θ < 180°, it is random, and its distribution can be given explicitly in certain cases. We also obtain partial results on the fluctuations of the interface around its asymptotic direction. The evolution of the competition interface in the growth model can be mapped onto the path of a second-class particle in the totally asymmetric simple exclusion process; from the existence of the limiting direction for the interface, we obtain a new and rather natural proof of the strong law of large numbers (with perhaps a random limit) for the position of the second-class particle at large times.
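In symbols, writing X(t) for the position of the second-class particle at time t, the law of large numbers obtained here takes the form

\[
\lim_{t \to \infty} \frac{X(t)}{t} = U \quad \text{almost surely},
\]

where, as stated above, the limit U may be random.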