Abstract:
Atomic ions trapped in micro-fabricated surface traps can be utilized as a physical platform with which to build a quantum computer. They possess many of the desirable qualities of such a device, including high-fidelity state preparation and readout, universal logic gates, and long coherence times, and they can be readily entangled with each other through photonic interconnects. The use of optical cavities integrated with trapped-ion qubits as a photonic interface presents the possibility of order-of-magnitude improvements in several key areas of their use in quantum computation. The first part of this thesis describes the design and fabrication of a novel surface trap for integration with an optical cavity. The trap is custom made on a highly reflective mirror surface and includes the capability of moving the ion trap location along all three trap axes with nanometer-scale precision. The second part of this thesis demonstrates the suitability of small micro-cavities, formed from laser-ablated fused silica substrates with radii of curvature in the 300-500 micron range, for use with the mirror trap as part of an integrated ion-trap cavity system. Quantum computing applications for such a system include dramatic improvements in the photonic entanglement rate (up to 10 kHz), the qubit measurement time (down to 1 microsecond), and the measurement error rate (down to the 10^-5 range). The final part of this thesis details a performance simulator for exploring the physical resource requirements and performance demands of scaling such a quantum computer to sizes capable of performing quantum algorithms beyond the limits of classical computation.
Abstract:
The integration of mathematics and science in secondary schools in the 21st century continues to be an important topic of practice and research. The purpose of my research study, which builds on studies by Frykholm and Glasson (2005) and Berlin and White (2010), is to explore the potential constraints and benefits of integrating mathematics and science in Ontario secondary schools based on the perspectives of in-service and pre-service teachers with various math and/or science backgrounds. A mixed qualitative and quantitative research design with an exploratory approach was used. The qualitative data were collected from a sample of 12 in-service teachers with various math and/or science backgrounds recruited from two school boards in Eastern Ontario. The quantitative and some qualitative data were collected from a sample of 81 pre-service teachers from the Queen’s University Bachelor of Education (B.Ed.) program. Semi-structured interviews were conducted with the in-service teachers, while a survey and a focus group were conducted with the pre-service teachers. The qualitative data were analyzed abductively. For the quantitative data, descriptive and inferential statistics (one-way ANOVAs and Pearson chi-square analyses) were calculated to examine the perspectives of teachers regardless of teaching background and to compare groups of teachers by teaching background. The findings of this study suggest that in-service and pre-service teachers have a positive attitude towards the integration of math and science and view it as valuable to student learning and success. The pre-service teachers viewed the integration as easy and did not express concerns about it. The in-service teachers, on the other hand, highlighted concerns and challenges such as resources, scheduling, and time constraints.
My results illustrate when teachers perceive it is valuable to integrate math and science and which aspects of the classroom benefit most from the integration. Furthermore, the results highlight barriers and possible solutions to improve the integration of math and science. In addition to the benefits and constraints of integration, my results illustrate why some teachers may opt out of integrating math and science and the different strategies teachers have used to integrate math and science in their classrooms.
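The inferential statistics named in the abstract above (one-way ANOVA and Pearson chi-square) can be sketched with SciPy. The data here are invented for illustration and are not the study's: Likert-style attitude scores and a hypothetical background-by-response contingency table.

```python
# Illustrative sketch (synthetic data, not the study's): a one-way ANOVA
# comparing attitude scores across three hypothetical teacher backgrounds,
# and a Pearson chi-square test of association between background and a
# yes/no survey response.
from scipy import stats

# Hypothetical attitude scores (1-5) for three teaching backgrounds
math_only = [4, 5, 4, 3, 5, 4]
science_only = [3, 4, 4, 4, 3, 5]
math_and_science = [5, 5, 4, 5, 4, 5]

f_stat, p_anova = stats.f_oneway(math_only, science_only, math_and_science)

# Hypothetical 3x2 contingency table: background vs. "would integrate" (yes/no)
table = [[20, 5],   # math-only
         [18, 8],   # science-only
         [25, 2]]   # math-and-science
chi2, p_chi, dof, expected = stats.chi2_contingency(table)

print(f"ANOVA: F={f_stat:.2f}, p={p_anova:.3f}")
print(f"Chi-square: chi2={chi2:.2f}, dof={dof}, p={p_chi:.3f}")
```

For a 3x2 table the chi-square test has (3-1)(2-1) = 2 degrees of freedom, which is what `chi2_contingency` reports.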
Abstract:
The use of serious games in education and their pedagogical benefit is being widely recognized. However, effective integration of serious games in education depends on addressing two big challenges: the successful incorporation of motivation and engagement that can lead to learning, and the highly specialised skills associated with customised development to meet the required pedagogical objectives. This paper presents the Westminster Serious Games Platform (wmin-SGP), an authoring tool that allows educators and domain experts without games design and development skills to create bespoke roleplay simulations in three-dimensional scenes featuring fully embodied virtual humans capable of verbal and non-verbal interaction with users, fit for specific educational objectives. The paper presents the wmin-SGP system architecture and evaluates its effectiveness in fulfilling its purpose via the implementation of two roleplay simulations, one for Politics and one for Law. In addition, it presents the results of two types of evaluation that address how successfully the wmin-SGP combines usability principles and the game core drives, based on the Octalysis gamification framework, that lead to motivating game experiences. The evaluation results show that the wmin-SGP provides an intuitive environment and tools that support users without advanced technical skills in creating, in real time, bespoke roleplay simulations with advanced graphical interfaces; satisfies most of the usability principles; and provides balanced simulations based on the Octalysis framework core drives. The paper concludes with a discussion of future extensions of this real-time authoring tool and directions for further development of the Octalysis framework to address learning.
Abstract:
This paper explains how the practice of integrating ecosystem-service thinking (i.e., ecological benefits for human beings) and institutions (i.e., organisations, policy rules) is essential for coastal spatial planning. Adopting an integrated perspective on ecosystem services (ESs) both helps in understanding a wide range of possible services and, at the same time, attunes institutions to local resource patterns. The objective of this paper is to identify the extent to which ESs are integrated in a specific coastal strategic planning case. A subsequent objective is to understand whether institutions are capable of managing ESs, in terms of uncovering institutional strengths and weaknesses that may exist in taking ESs into account in existing institutional practices. These two questions are addressed through the application of a content analysis method and a multi-level analysis framework to formal institutions. Jiaozhou Bay in China is used as an illustrative case. The results show that some ESs have been implicitly acknowledged, but by no means the whole range. This partial ES implementation could result from any of four institutional weaknesses in the strategic plans of Jiaozhou Bay, namely a dominant market-oriented interest, fragmented institutional structures for managing ESs, limited ES assessment, and a lack of integrated reflection of the social value of ESs in decision-making. Finally, generalizations about multi-level institutional settings for ES integration, such as inter-organisational fragmentation and limited use of ES assessment in operation, are drawn in comparison with other international case studies. The comparison also highlights the influence of extensive market-oriented incentives and governments' exclusive responsibilities on ES governance in the Chinese context.
Abstract:
Multiple myeloma is characterized by genomic alterations frequently involving gains and losses of chromosomes. Single nucleotide polymorphism (SNP)-based mapping arrays allow the identification of copy number changes at the sub-megabase level and the identification of loss of heterozygosity (LOH) due to monosomy and uniparental disomy (UPD). We have found that SNP-based mapping array data and fluorescence in situ hybridization (FISH) copy number data correlate well, making the technique robust as a tool to investigate myeloma genomics. The most frequently identified alterations are located at 1p, 1q, 6q, 8p, 13, and 16q. LOH is found in these large regions and also in smaller regions throughout the genome, with a median size of 1 Mb. We have identified that UPD is prevalent in myeloma and occurs through a number of mechanisms, including mitotic nondisjunction and mitotic recombination. For the first time in myeloma, integration of mapping and expression data has allowed us to reduce the complexity of standard gene expression data and identify candidate genes important both in the transition from normal through monoclonal gammopathy of undetermined significance (MGUS) to myeloma and in different subgroups within myeloma. We have documented these genes, providing a focus for further studies to identify and characterize those that are key to the pathogenesis of myeloma.
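The distinction the abstract above draws between deletion-based LOH (monosomy) and copy-neutral LOH (UPD) can be illustrated with the two quantities SNP arrays report per marker: the B-allele frequency (BAF) and the log R ratio (LRR). This is a hypothetical sketch with invented thresholds, not the paper's pipeline: a run of SNPs with no heterozygous BAF values but a normal LRR is consistent with UPD, while the same run with a depressed LRR suggests deletion.

```python
# Hypothetical sketch (invented thresholds, not the study's method) of the
# logic behind SNP-array LOH calls: BAF near 0 or 1 with normal log R ratio
# suggests copy-neutral LOH (UPD); BAF near 0 or 1 with reduced log R ratio
# suggests LOH via deletion (monosomy).

def classify_region(bafs, log_r_ratios, het_band=(0.2, 0.8), lrr_loss=-0.3):
    """Classify a genomic region from its per-SNP BAF and LRR values."""
    has_het = any(het_band[0] < b < het_band[1] for b in bafs)
    mean_lrr = sum(log_r_ratios) / len(log_r_ratios)
    if has_het:
        return "heterozygous (no LOH)"
    if mean_lrr < lrr_loss:
        return "LOH via deletion (monosomy)"
    return "copy-neutral LOH (possible UPD)"

# Toy regions: homozygous run with normal vs. reduced copy number signal
print(classify_region([0.02, 0.98, 0.01, 0.99], [0.0, 0.05, -0.02, 0.01]))
# -> copy-neutral LOH (possible UPD)
print(classify_region([0.01, 0.99, 0.02, 0.98], [-0.5, -0.45, -0.6, -0.4]))
# -> LOH via deletion (monosomy)
```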
Abstract:
This thesis examines the impact on child and adolescent psychotherapists within CAMHS of the introduction of routine outcome measures (ROMs) associated with the Children and Young People’s Improving Access to Psychological Therapies programme (CYP-IAPT). All CAMHS therapists working within a particular NHS mental health Trust were required to trial CYP-IAPT ROMs as part of their everyday clinical practice from October 2013 to September 2014. During this period considerable freedom was allowed as to which of the measures each therapist used and at what frequency. In order to assess the impact of CYP-IAPT ROMs on child psychotherapy, I conducted semi-structured interviews with eight psychotherapists within a particular CAMHS partnership within one NHS Trust. Each statement was coded and grouped according to whether it related to initial (generic) assessment, goal setting and monitoring, monitoring of on-going progress, therapeutic alliance, or issues concerning how data might be used or interpreted by managers and commissioners. Analysis of the interviews revealed the greatest concern about session-by-session ROMs, as these are felt to impact most significantly on psychotherapy; therapists felt that session-by-session ROMs do not take account of negative transference relationships, are overly repetitive, and are used to reward or punish the therapist. Measures used at assessment and review were viewed as most compatible with psychotherapy, although often experienced as excessively time-consuming. The Goal Based Outcome Measure was generally experienced as compatible with psychotherapy so long as goals are formed collaboratively between therapist and young person. There was considerable anxiety about how data may be (mis)used and (mis)interpreted by managers and commissioners, for example to end treatment prematurely, to trigger a change of therapist in the face of negative ROMs data, or to damage psychotherapy. Use of ROMs for short-term and generic work was experienced as less intrusive and contentious.
Abstract:
The automated transfer of flight logbook information from aircraft into aircraft maintenance systems leads to reduced ground and maintenance time and is thus desirable from an economic point of view. Until recently, flight logbooks were not managed electronically on board, or at least the transfer of data from aircraft to ground maintenance systems was executed manually. The latest aircraft types, such as the Airbus A380 or the Boeing 787, do support an electronic logbook and thus make automated transfer possible. A generic flight logbook transfer system must deal with different data formats on the input side, due to different aircraft makes and models, as well as with different, distributed aircraft maintenance systems for different airlines as aircraft operators. This article contributes the concept and top-level distributed system architecture of such a generic system for automated flight log data transfer. It has been developed within a joint industry and applied research project. The architecture has already been successfully evaluated in a prototypical implementation.
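The core design problem described above, one pipeline accepting different per-aircraft-type input formats, is classically solved with per-format adapters that normalize into a common record. The sketch below is hypothetical: the field names, formats, and aircraft-type keys are invented for illustration and are not taken from the article's architecture.

```python
# Hypothetical adapter sketch (invented formats and field names): per-aircraft
# parsers normalize different logbook payloads into one common record that a
# downstream maintenance system could consume.
import json
from dataclasses import dataclass

@dataclass
class LogEntry:
    tail_number: str
    fault_code: str
    description: str

def parse_a380_json(raw: str) -> LogEntry:
    # Assumed JSON payload shape for illustration
    d = json.loads(raw)
    return LogEntry(d["acReg"], d["faultId"], d["text"])

def parse_b787_csv(raw: str) -> LogEntry:
    # Assumed semicolon-separated payload shape for illustration
    tail, code, desc = raw.strip().split(";")
    return LogEntry(tail, code, desc)

PARSERS = {"A380": parse_a380_json, "B787": parse_b787_csv}

def transfer(aircraft_type: str, raw: str) -> LogEntry:
    """Dispatch to the right format adapter and return a normalized record."""
    return PARSERS[aircraft_type](raw)

entry = transfer("B787", "D-ABYT;32-11-00;Brake temperature sensor fault")
print(entry.fault_code)  # -> 32-11-00
```

Adding support for a new aircraft type then means registering one new parser, leaving the downstream maintenance-system interface untouched.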
Abstract:
The main drivers for the development and evolution of Cyber Physical Systems (CPS) are the reduction of development costs and time, along with the enhancement of the designed products. The aim of this survey paper is to provide an overview of different types of systems and the associated transition process from mechatronics to CPS and cloud-based (IoT) systems. It further considers the requirement that methodologies for CPS design should be part of a multi-disciplinary development process within which designers should focus not only on the separate physical and computational components, but also on their integration and interaction. Challenges related to CPS design are therefore considered in the paper from the perspectives of the physical processes, computation, and integration respectively. Illustrative case studies are selected from different system levels, starting with a description of the overarching concept of Cyber Physical Production Systems (CPPSs). The analysis and evaluation of the specific properties of a sub-system using a condition monitoring system, important for maintenance purposes, is then given for a wind turbine.
Abstract:
Force plate or pressure plate analysis emerged as an innovative tool in biomechanics and sports medicine. It allows engineers, scientists, and doctors to virtually reconstruct the way a person steps while running or walking, using a measuring system and a computer. With this information they can calculate and analyze a whole set of variables and factors that characterize the step, and are then able to make corrections and/or optimizations, designing appropriate shoes and insoles for the patient. The idea is to study and understand all the hardware and software implications of this process and all the components involved, and then propose an alternative solution. This solution should have at least similar performance to existing systems, and should increase the accuracy and/or the sampling frequency to obtain better results. By the end, there should be a working prototype of a pressure measuring system and a mathematical model to govern it. The costs of the system have to be lower than those of most systems on the market.
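One of the basic quantities a pressure-plate system derives per sampled frame is the centre of pressure (CoP), the pressure-weighted mean of the sensor coordinates. The sketch below is illustrative only and is not the thesis's model; the grid size, cell spacing, and readings are invented.

```python
# Illustrative sketch (invented grid and values): centre of pressure (CoP)
# computed as the pressure-weighted mean of sensor-cell coordinates, in mm.
def center_of_pressure(grid, cell_size_mm=10.0):
    """Return (x, y) CoP in mm for one frame of pressure readings."""
    total = sum(sum(row) for row in grid)
    if total == 0:
        return None  # no contact in this frame
    x = sum(p * j * cell_size_mm for row in grid
            for j, p in enumerate(row))
    y = sum(p * i * cell_size_mm for i, row in enumerate(grid)
            for p in row)
    return (x / total, y / total)

# 3x3 toy frame: pressure concentrated symmetrically around the centre cell
frame = [[0, 1, 0],
         [1, 4, 1],
         [0, 1, 0]]
print(center_of_pressure(frame))  # -> (10.0, 10.0)
```

Tracking the CoP trajectory across frames at the system's sampling frequency is what lets the step be reconstructed and analyzed.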
Abstract:
The last two decades have seen many exciting examples of tiny robots, from a few cm³ to less than one cm³. Although individually limited, a large group of these robots has the potential to work cooperatively and accomplish complex tasks. Two examples from nature that exhibit this type of cooperation are ant and bee colonies. Such robots have the potential to assist in applications like search and rescue, military scouting, infrastructure and equipment monitoring, nano-manufacture, and possibly medicine. Most of these applications require the high level of autonomy that has been demonstrated by large robotic platforms, such as the iRobot and the Honda ASIMO. However, when robot size shrinks, current approaches to achieving the necessary functions are no longer valid. This work focused on the challenges associated with electronics and fabrication. We addressed three major technical hurdles inherent in current approaches: 1) the difficulty of compact integration; 2) the need for real-time and power-efficient computation; 3) the unavailability of commercial tiny actuators and motion mechanisms. The aim of this work was to provide enabling hardware technologies to achieve autonomy in tiny robots. We proposed a decentralized application-specific integrated circuit (ASIC) in which each component is responsible for its own operation and autonomy to the greatest extent possible. The ASIC consists of electronics modules for the fundamental functions required to fulfill the desired autonomy: actuation, control, power supply, and sensing. The actuators and mechanisms could potentially be post-fabricated directly on the ASIC. This design makes for a modular architecture.
The following components were shown to work in physical implementations or simulations: 1) a tunable motion controller for ultralow-frequency actuation; 2) a nonvolatile memory and programming circuit to achieve automatic, one-time programming; 3) a high-voltage circuit with the highest reported breakdown voltage in standard 0.5 μm CMOS; 4) thermal actuators fabricated using a CMOS-compatible process; 5) a low-power mixed-signal computational architecture for a robotic dynamics simulator; 6) a frequency-boost technique to achieve low jitter in ring oscillators. These contributions will be generally enabling for other systems with strict size and power constraints, such as wireless sensor nodes.
Abstract:
The current study investigated the cognitive workload of sentence and clause wrap-up in younger and older readers. A large number of studies have demonstrated the presence of wrap-up effects: peaks in processing time at clause and sentence boundaries that some argue reflect attention to organizational and integrative semantic processes. However, the exact nature of these wrap-up effects is still not entirely clear, with some arguing that wrap-up is not related to processing difficulty but rather is triggered by a low-level oculomotor response or the implicit monitoring of intonational contour. The notion that wrap-up effects are resource-demanding was directly tested by examining the degree to which sentence and clause wrap-up affect the parafoveal preview benefit. Older and younger adults read passages in which a target word N occurred in a sentence-internal, clause-final, or sentence-final position. A gaze-contingent boundary change paradigm was used in which, on some trials, a non-word preview of word N+1 was replaced by a target word once the eyes crossed an invisible boundary located between words N and N+1. All measures of reading time on word N were longer at clause and sentence boundaries than in the sentence-internal position. In the earliest measures of reading time, sentence and clause wrap-up showed evidence of reducing the magnitude of the preview benefit similarly for younger and older adults. However, this effect was moderated by age in gaze duration, such that older adults showed a complete reduction of the preview benefit in the sentence-final condition. Additionally, sentence and clause wrap-up were negatively associated with the preview benefit. Collectively, the findings from the current study suggest that wrap-up is cognitively demanding and may become less efficient with age, thus resulting in a reduction of the parafoveal preview benefit during normal reading.
Abstract:
Numerous components of the Arctic freshwater system (atmosphere, ocean, cryosphere, terrestrial hydrology) have experienced large changes over the past few decades, and these changes are projected to amplify further in the future. Observations are particularly sparse, both in time and space, in the Polar Regions. Hence, modeling systems have been widely used and are a powerful tool for gaining understanding of the functioning of the Arctic freshwater system and its integration within the global Earth system and climate. Here, we present a review of modeling studies addressing some aspect of the Arctic freshwater system. Through illustrative examples, we point out the value of using a hierarchy of models with increasing complexity and component interactions, in order to disentangle the important processes at play in the variability and changes of the different components of the Arctic freshwater system and the interplay between them. We discuss past and projected changes in the Arctic freshwater system and explore the sources of uncertainty associated with these model results. We further elaborate on some missing processes that should be included in future generations of Earth system models and highlight the importance of better quantification and understanding of natural variability, amongst other factors, for improved predictions of Arctic freshwater system change.
Abstract:
Wireless sensor networks (WSNs) are the key enablers of the internet of things (IoT) paradigm. Traditionally, sensor network research has treated such networks as separate from the internet, motivated by power and device constraints. The IETF 6LoWPAN draft standard changes this, defining how IPv6 packets can be efficiently transmitted over IEEE 802.15.4 radio links. Thanks to 6LoWPAN technology, low-power, low-cost microcontrollers can be connected to the internet, forming what is known as the wireless embedded internet. Another IETF recommendation, CoAP, allows these devices to communicate interactively over the internet. The integration of such tiny, ubiquitous electronic devices with the internet enables interesting real-time applications. This thesis work evaluates the performance of a stack consisting of CoAP and 6LoWPAN over the IEEE 802.15.4 radio link using the Contiki OS and the Cooja simulator, along with the CoAP framework Californium (Cf). The stack was then implemented on real hardware, using a Raspberry Pi as a border router with Tmote Sky sensors acting as slip radios and as CoAP servers relaying temperature and humidity data. The reliability of the stack was also demonstrated during a scalability analysis conducted on the physical deployment. Interoperability is ensured by connecting the WSN to the global internet using different hardware platforms supported by Contiki, without the use of the specialized gateways commonly found in non-IP-based networks. This work therefore developed and demonstrated a heterogeneous, IP-based wireless sensor network stack and conducted performance analysis of the stack, both in simulation and on real hardware.
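Part of what makes CoAP suitable for constrained devices is its compact binary framing (RFC 7252): every message starts with a fixed 4-byte header. As a hedged sketch of what a stack like Californium or Contiki's CoAP implementation produces on the wire, the snippet below builds that header by hand for a confirmable GET request.

```python
# Sketch of the CoAP fixed header per RFC 7252: Ver(2 bits) | Type(2) |
# TKL(4), then Code(8), then Message ID(16), followed by the token.
def coap_header(msg_type, code, message_id, token=b""):
    """Build the 4-byte CoAP fixed header plus token."""
    ver = 1                       # CoAP version is always 1
    tkl = len(token)
    assert tkl <= 8, "token length field is 4 bits, max 8 bytes"
    byte0 = (ver << 6) | (msg_type << 4) | tkl
    return bytes([byte0, code, message_id >> 8, message_id & 0xFF]) + token

CON, GET = 0, 0x01                # confirmable message type; GET is code 0.01
header = coap_header(CON, GET, 0x1234)
print(header.hex())  # -> 40011234
```

Four bytes of fixed overhead (versus dozens for an HTTP request line and headers) is a large part of why CoAP fits IEEE 802.15.4 frames, whose maximum physical-layer payload is 127 bytes.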
Abstract:
Stochastic methods based on time-series modeling combined with geostatistics can be useful tools to describe the variability of water-table levels in time and space and to account for uncertainty. Water-level monitoring networks can give information about the dynamics of the aquifer domain in both dimensions. Time-series modeling is an elegant way to treat monitoring data without the complexity of physically based mechanistic models. Time-series model predictions can be interpolated spatially, with the spatial differences in water-table dynamics determined by the spatial variation in the system properties and the temporal variation driven by the dynamics of the inputs into the system. An integration of stochastic methods is presented, based on time-series modeling and geostatistics, as a framework to predict water levels for decision making in groundwater management and land-use planning. The methodology is applied to a case study in an outcrop area of the Guarani Aquifer System (GAS) located in the southeastern part of Brazil. Communication of results in a clear and understandable form, via simulated scenarios, is discussed as an alternative when translating scientific knowledge into applications of stochastic hydrogeology in large aquifers with limited monitoring network coverage, like the GAS.
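The temporal half of such a framework can be sketched with the simplest stochastic time-series model, an AR(1) process fitted to a single well's record; in the framework described above, per-well models like this would then be interpolated spatially with geostatistics. The data below are synthetic and the model choice is illustrative, not the study's actual model.

```python
# Illustrative sketch (synthetic data, not the study's model): fit an AR(1)
# model to a water-level series by least squares on lag-1 pairs.
import random

random.seed(42)

# Synthetic water levels: AR(1) around a 50 m mean with phi = 0.8
phi_true, mean_level = 0.8, 50.0
levels = [mean_level]
for _ in range(499):
    noise = random.gauss(0, 0.2)
    levels.append(mean_level + phi_true * (levels[-1] - mean_level) + noise)

# Least-squares estimate of phi from demeaned lag-1 pairs
m = sum(levels) / len(levels)
x = [v - m for v in levels]
num = sum(x[t] * x[t - 1] for t in range(1, len(x)))
den = sum(v * v for v in x[:-1])
phi_hat = num / den
print(f"estimated phi = {phi_hat:.2f}")  # close to the true 0.8
```

A fitted persistence parameter like `phi_hat`, estimated well by well, is exactly the kind of quantity whose spatial variation geostatistical interpolation can then map across the aquifer.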
Abstract:
The exorbitant privilege literature analyzes the positive differential returns on net foreign assets enjoyed by the United States in the last quarter of the twentieth century as the issuer of the global reserve currency. In the first age of international financial integration (1870-1914), the global reserve currency was the British pound sterling. Whether the United Kingdom enjoyed a similar privilege is analyzed with a new dataset encompassing microdata on railroad and government financial securities. The use of microdata avoids the flaws that have plagued the US studies, particularly the use of incompatible aggregate variables. New measures of Britain’s net external position provide estimates of capital gains and dividend yields. As the issuer of the global reserve currency, Britain received average revenues of 13.4% of GDP from its international investment position. The country satisfied the necessary condition for the existence of an exorbitant privilege. Nonetheless, Britain’s case is slightly different from the American one. British external assets received higher returns than were paid on external liabilities within each asset class, but Britain invested mostly in securities with a low risk profile. The low return on its net external position meant that, for most of the period, Britain would not have received positive revenues from the rest of the world had it been a net debtor country, but this pattern changed after 1900. The finding supports the claim that, at least partially, the exorbitant privilege is a general characteristic of the issuer of the global reserve currency and not unique to the late-twentieth-century US.