869 results for Multiple input and multiple output autonomous flight systems
Abstract:
This paper reviews the literature on the practice of using Online Analytical Processing (OLAP) systems to recall information stored by Online Transactional Processing (OLTP) systems. The review provides a basis for discussing the need for information recalled through OLAP systems to maintain the contexts of transactions as captured by the respective OLTP system. The paper observes an industry trend in which OLTP systems process information into data that are then stored in databases without the business rules that were used to produce them. This necessitates a practice whereby sets of business rules are used to extract, cleanse, transform, and load data from disparate OLTP systems into OLAP databases to support complex reporting and analytics. These sets of business rules are usually not the same as the business rules used to capture the data in the originating OLTP systems. The paper argues that differences between the business rules used to interpret the same data sets risk gaps in semantics between information captured by OLTP systems and information recalled through OLAP systems. Literature concerning the modeling of business transaction information as facts with context, as part of the modeling of information systems, was reviewed to identify design trends contributing to the design quality of OLTP and OLAP systems. The paper then argues that the quality of OLTP and OLAP systems design depends critically on the capture of facts with associated context, the encoding of facts with context into data with business rules, the storage and sourcing of data with business rules, the decoding of data with business rules back into facts with context, and the recall of facts with associated context.
The paper proposes UBIRQ, a design model to aid the co-design of data and business-rules storage for OLTP and OLAP purposes. The proposed model opens the way for multi-purpose databases and business-rules stores shared by OLTP and OLAP systems. Such implementations would enable OLTP systems to record and store data together with the executions of business rules, allowing both OLTP and OLAP systems to query data alongside the business rules used to capture them, thereby ensuring that information recalled via OLAP systems preserves the contexts of transactions as per the data captured by the respective OLTP system.
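The co-storage idea can be sketched with a toy relational schema (table and column names here are hypothetical illustrations, not taken from the paper): each fact row carries a foreign key to the versioned business rule under which it was captured, so an OLAP-style recall can join facts back to their capture-time context.

```python
import sqlite3

# Minimal sketch of co-storing data with the business rules used to capture it,
# so OLAP queries can recall facts together with their capture-time context.
# Table and column names are illustrative, not taken from the paper.
conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE business_rule (
    rule_id INTEGER PRIMARY KEY,
    description TEXT,          -- human-readable rule text
    version TEXT
);
CREATE TABLE transaction_fact (
    fact_id INTEGER PRIMARY KEY,
    amount REAL,
    rule_id INTEGER REFERENCES business_rule(rule_id)  -- capture-time context
);
""")
conn.execute("INSERT INTO business_rule VALUES (1, 'Net amount excludes VAT', 'v2')")
conn.execute("INSERT INTO transaction_fact VALUES (10, 99.5, 1)")

# An OLAP-style recall joins facts back to the rules that produced them,
# preserving the transaction's original semantics.
row = conn.execute("""
    SELECT f.amount, r.description, r.version
    FROM transaction_fact f JOIN business_rule r ON f.rule_id = r.rule_id
""").fetchone()
```

A real UBIRQ-style store would version entire rule sets rather than single rows, but the join pattern is the essential point: data are never recalled without the rules that gave them meaning.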
Abstract:
Countless cities are rapidly developing across the globe, pressing the need for clear urban planning and design recommendations geared towards sustainability. This article examines the intersections of Jane Jacobs' four conditions for diversity with low-carbon and low-energy-use urban systems in four cities around the world: Lyon (France), Chicago (United States), Kolkata (India), and Singapore (Singapore). After reviewing Jacobs' four conditions for diversity, we introduce the four cities and describe their historical development context. We then present a framework to study the cities along three dimensions: population and density, infrastructure development/use, and climate and landscape. These cities differ in many respects, and their analysis is instructive for many other cities around the globe. Jacobs' conditions are present in all of them, manifested in different ways and to varying degrees. Overall, we find that the adoption of Jacobs' conditions aligns well with concepts of low-carbon urban systems, with their focus on walkability, transit-oriented design, and more efficient land use (i.e., smaller unit sizes). Transportation-sector emissions seem to show a stronger influence from the presence of Jacobs' conditions, while the link is less pronounced in the building sector. Kolkata, a low-income, developing-world city, seems to possess many of Jacobs' conditions while exhibiting low per capita emissions; maintaining both of these during its economic expansion will take careful consideration. Greenhouse gas mitigation, however, is inherently an in situ problem, and the first task must therefore be to gain local knowledge of an area before developing strategies to lower its carbon footprint.
Abstract:
Pineal melatonin release exhibits a circadian rhythm with a tight nocturnal pattern. Melatonin synthesis is regulated by the master circadian clock within the hypothalamic suprachiasmatic nucleus (SCN) and is also directly inhibited by light. The SCN is necessary for both circadian regulation and light inhibition of melatonin synthesis, and thus it has been difficult to isolate these two regulatory limbs to define the output pathways by which the SCN conveys circadian and light phase information to the pineal. A 22-h light-dark (LD) cycle forced desynchrony protocol leads to the stable dissociation of rhythmic clock gene expression within the ventrolateral SCN (vlSCN) and the dorsomedial SCN (dmSCN). In the present study, we have used this protocol to assess the pattern of melatonin release under forced desynchronization of these SCN subregions. In light of our reported patterns of clock gene expression in the forced desynchronized rat, we propose that the vlSCN oscillator entrains to the 22-h LD cycle whereas the dmSCN shows relative coordination to the light-entrained vlSCN, and that this dual-oscillator configuration accounts for the pattern of melatonin release. We present a simple mathematical model in which the relative coordination of a single oscillator within the dmSCN to a single light-entrained oscillator within the vlSCN faithfully portrays the circadian phase, duration and amplitude of melatonin release under forced desynchronization. Our results underscore the importance of the SCN's subregional organization to both photic input processing and rhythmic output control.
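The dual-oscillator idea can be caricatured with two coupled phase oscillators. This is an illustrative sketch with assumed parameter values, not the authors' published model: the vlSCN is treated as fully entrained to the 22-h cycle, the dmSCN runs at a longer assumed intrinsic period and is weakly pulled toward the vlSCN phase ("relative coordination"), and melatonin is taken as elevated during the dmSCN half-cycle.

```python
import math

# Illustrative sketch (not the authors' published model) of two coupled phase
# oscillators: a vlSCN oscillator entrained to a 22-h light-dark cycle and a
# dmSCN oscillator with a longer intrinsic period, weakly pulled toward the
# vlSCN phase ("relative coordination"). All parameter values are assumptions.
T_LD = 22.0        # period of the forced light-dark cycle (h)
T_DM = 24.5        # assumed intrinsic period of the dmSCN oscillator (h)
K = 0.05           # assumed weak coupling strength (rad/h)
dt = 0.1           # integration step (h)

phi_vl, phi_dm = 0.0, 0.0
melatonin = []
for step in range(int(10 * 24 / dt)):           # simulate ~10 days
    # vlSCN is taken as fully entrained: it simply tracks the 22-h cycle.
    phi_vl = 2 * math.pi * (step * dt % T_LD) / T_LD
    # dmSCN advances at its own rate, weakly pulled toward the vlSCN phase.
    phi_dm += dt * (2 * math.pi / T_DM + K * math.sin(phi_vl - phi_dm))
    # Melatonin proxy: elevated only during the dmSCN "subjective night".
    melatonin.append(1.0 if math.sin(phi_dm) < 0 else 0.0)

# Over many cycles, roughly half the time shows elevated melatonin, as
# expected for a half-cycle nocturnal release pattern.
frac = sum(melatonin) / len(melatonin)
```

Because the coupling is weaker than the period mismatch, the dmSCN phase drifts relative to the vlSCN while being periodically accelerated and decelerated, which is the qualitative signature of relative coordination.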
Abstract:
Combining the results of behavioral, neuronal immediate early gene activation, lesion and neuroanatomical experiments, we have presently investigated the role of the superior colliculus (SC) in predatory hunting. First, we have shown that insect hunting is associated with a characteristic large increase in Fos expression in the lateral part of the intermediate gray layer of the SC (SCig). Next, we have shown that animals with bilateral NMDA lesions of the lateral parts of the SC presented a significant delay in starting to chase the prey and longer periods engaged in activities other than predatory hunting. They also showed a clear deficit in orienting themselves toward the moving prey and lost the stereotyped sequence of actions seen for capturing, holding and killing the prey. Our Phaseolus vulgaris-leucoagglutinin analysis revealed that the lateral SCig, besides providing the well-documented descending crossed pathway to premotor sites in the brainstem and spinal cord, projects to a number of midbrain and diencephalic sites likely to influence key functions in the context of predatory behavior, such as general levels of arousal, motivational level to hunt or forage, behavioral planning, appropriate selection of the basal ganglia motor plan to hunt, and motor output of the primary motor cortex. In contrast to the lateral SC lesions, medial SC lesions produced a small deficit in predatory hunting; and compared to what we have seen for the lateral SCig, the medial SCig has a very limited set of projections to thalamic sites related to the control of motor planning or motor output, and provides conspicuous inputs to brainstem sites involved in organizing a wide range of anti-predatory defensive responses. Overall, the present results serve to clarify how the different functional domains in the SC may mediate the decision to pursue and hunt a prey or escape from a predator. (C) 2010 IBRO. Published by Elsevier Ltd. All rights reserved.
Abstract:
Large-scale simulations of parts of the brain using detailed neuronal models to improve our understanding of brain functions are becoming a reality with the usage of supercomputers and large clusters. However, the high acquisition and maintenance cost of these computers, including the physical space, air conditioning, and electrical power, limits the number of simulations of this kind that scientists can perform. Modern commodity graphical cards, based on the CUDA platform, contain graphical processing units (GPUs) composed of hundreds of processors that can simultaneously execute thousands of threads and thus constitute a low-cost solution for many high-performance computing applications. In this work, we present a CUDA algorithm that enables the execution, on multiple GPUs, of simulations of large-scale networks composed of biologically realistic Hodgkin-Huxley neurons. The algorithm represents each neuron as a CUDA thread, which solves the set of coupled differential equations that model each neuron. Communication among neurons located in different GPUs is coordinated by the CPU. We obtained speedups of 40 for the simulation of 200k neurons that received random external input and speedups of 9 for a network with 200k neurons and 20M neuronal connections, in a single computer with two graphic boards with two GPUs each, when compared with a modern quad-core CPU. Copyright (C) 2010 John Wiley & Sons, Ltd.
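The per-thread computation described above, integrating one neuron's coupled Hodgkin-Huxley equations, can be sketched as a serial Python analogue (the paper's implementation is in CUDA; the parameters below are the standard squid-axon values, not necessarily those used in the paper):

```python
import math

# One neuron's worth of work: forward-Euler integration of the Hodgkin-Huxley
# equations with classical squid-axon parameters. In the paper's CUDA scheme,
# each GPU thread performs this loop for its own neuron.
C, g_na, g_k, g_l = 1.0, 120.0, 36.0, 0.3       # uF/cm^2 and mS/cm^2
E_na, E_k, E_l = 50.0, -77.0, -54.387            # reversal potentials (mV)
dt, I_ext = 0.01, 10.0                           # step (ms), input (uA/cm^2)

V, m, h, n = -65.0, 0.05, 0.6, 0.32              # resting initial state
peak = V
for _ in range(int(50.0 / dt)):                  # 50 ms of simulated time
    # Voltage-dependent rate constants (ms^-1).
    a_m = 0.1 * (V + 40.0) / (1.0 - math.exp(-(V + 40.0) / 10.0))
    b_m = 4.0 * math.exp(-(V + 65.0) / 18.0)
    a_h = 0.07 * math.exp(-(V + 65.0) / 20.0)
    b_h = 1.0 / (1.0 + math.exp(-(V + 35.0) / 10.0))
    a_n = 0.01 * (V + 55.0) / (1.0 - math.exp(-(V + 55.0) / 10.0))
    b_n = 0.125 * math.exp(-(V + 65.0) / 80.0)
    # Ionic currents and Euler update of the four state variables.
    I_ion = (g_na * m**3 * h * (V - E_na)
             + g_k * n**4 * (V - E_k)
             + g_l * (V - E_l))
    V += dt * (I_ext - I_ion) / C
    m += dt * (a_m * (1.0 - m) - b_m * m)
    h += dt * (a_h * (1.0 - h) - b_h * h)
    n += dt * (a_n * (1.0 - n) - b_n * n)
    peak = max(peak, V)
```

With a sustained 10 uA/cm^2 input the model fires repetitive action potentials, so the membrane potential crosses 0 mV. In the multi-GPU setting, the CPU-coordinated step exchanges spike events between boards after each integration interval.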
Abstract:
One of the key issues in e-learning environments is the possibility of creating and evaluating exercises. However, the lack of tools supporting the authoring and automatic checking of exercises for specific topics (e.g., geometry) drastically reduces the advantages of using e-learning environments on a larger scale, as usually happens in Brazil. This paper describes an algorithm, and a tool based on it, designed for the authoring and automatic checking of geometry exercises. The algorithm dynamically compares the distances between the geometric objects of the student's solution and the template's solution, provided by the author of the exercise. Each solution is a geometric construction which is considered a function receiving geometric objects (input) and returning other geometric objects (output). Thus, for a given problem, if we know one function (construction) that solves the problem, we can compare it to any other function to check whether they are equivalent or not. Two functions are equivalent if, and only if, they have the same output when the same input is applied. If the student's solution is equivalent to the template's solution, then we consider the student's solution a correct solution. Our software utility provides both authoring and checking tools that work directly on the Internet, together with learning management systems. These tools are implemented using the dynamic geometry software, iGeom, which has been used in a geometry course since 2004 and has a successful track record in the classroom. Empowered with these new features, iGeom simplifies teachers' tasks, solves non-trivial problems in student solutions and helps to increase student motivation by providing feedback in real time. (c) 2008 Elsevier Ltd. All rights reserved.
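The equivalence test described above can be sketched as follows, with two hypothetical constructions of a midpoint standing in for the student's and the template's solutions (iGeom compares geometric constructions, not Python functions; the sampling-based check is the transferable idea):

```python
import math
import random

# Sketch of the checking idea: two constructions are treated as functions from
# input objects to output objects, and judged equivalent when they agree
# (within tolerance) on sampled inputs. The constructions are hypothetical
# examples, not taken from iGeom.
def template_midpoint(a, b):
    """Template construction: midpoint as the average of two points."""
    return ((a[0] + b[0]) / 2.0, (a[1] + b[1]) / 2.0)

def student_midpoint(a, b):
    """A different but geometrically equivalent construction of the midpoint."""
    return (b[0] + (a[0] - b[0]) / 2.0, b[1] + (a[1] - b[1]) / 2.0)

def equivalent(f, g, samples=100, tol=1e-9):
    """Compare outputs of two constructions on randomly sampled inputs."""
    random.seed(0)                       # reproducible sampling
    for _ in range(samples):
        a = (random.uniform(-10, 10), random.uniform(-10, 10))
        b = (random.uniform(-10, 10), random.uniform(-10, 10))
        if math.dist(f(a, b), g(a, b)) > tol:
            return False
    return True

ok = equivalent(template_midpoint, student_midpoint)
```

Random sampling makes this a probabilistic check: agreement on many inputs strongly suggests, but does not prove, equivalence, which is why tolerance and sample count matter in practice.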
Abstract:
The diffusion of Concentrating Solar Power (CSP) systems is currently proceeding at a much slower pace than that of photovoltaic (PV) power systems. This is mainly because of the higher present cost of solar thermal power plants, but also because of the time needed to build them. Although the economic attractiveness of the different concentrating technologies varies, PV power still dominates the market. The price of CSP is expected to drop significantly in the near future, and widespread installation will follow. The main aim of this project is the creation of several relevant case studies on solar thermal power generation and a comparison between them. The purpose of this detailed comparison is the techno-economic appraisal of a number of CSP systems and an understanding of their behaviour under various boundary conditions. The CSP technologies examined are the Parabolic Trough, the Molten Salt Power Tower, the Linear Fresnel Mirrors and the Dish Stirling. These systems are appropriately sized and simulated. All of the simulations aim at optimizing the particular system, which involves two main issues: achieving the lowest possible levelized cost of electricity, and maximizing the annual energy output (kWh). The project also aims to identify the factors that most affect the results and, more specifically, how they contribute to cost reduction or power generation. Photovoltaic systems are also simulated under the same boundary conditions to facilitate a comparison between the PV and the CSP systems. Last but not least, the system that performs best in each case study is determined.
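The first optimization objective, the levelized cost of electricity, can be sketched as discounted lifetime cost over discounted lifetime energy (the input numbers below are hypothetical, chosen only to illustrate the calculation):

```python
# Simple levelized cost of electricity (LCOE): the ratio of discounted
# lifetime costs to discounted lifetime energy production. Input values
# are hypothetical, not results from the project's case studies.
def lcoe(capex, annual_opex, annual_energy_kwh, discount_rate, lifetime_years):
    costs = capex + sum(annual_opex / (1 + discount_rate) ** t
                        for t in range(1, lifetime_years + 1))
    energy = sum(annual_energy_kwh / (1 + discount_rate) ** t
                 for t in range(1, lifetime_years + 1))
    return costs / energy          # currency units per kWh

value = lcoe(capex=5_000_000, annual_opex=100_000,
             annual_energy_kwh=8_000_000,
             discount_rate=0.07, lifetime_years=25)
```

Because capital cost enters undiscounted at year zero while energy is discounted over the whole lifetime, CSP's high upfront cost penalizes its LCOE relative to PV even when annual output is similar, which is the comparison the project sets out to quantify.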
Abstract:
The use of mobile applications has grown radically in recent years, and they interact with many systems. This places higher demands on quality and on adapting the application to many different devices, operating systems and platforms, which makes testing of mobile applications more important and more extensive. This work was conducted as a comparative case study in the area of mobile application testing and test tools. One aim was to describe how mobile applications are tested today, which was done through literature studies and interviews with IT companies. Another aim was to evaluate four test tools, their advantages and disadvantages, how they can be used for testing mobile applications, and how they compare with manual testing without test tools. This was done by building first-hand experience from using the test tools. Throughout the work we used mobile applications provided by Triona, our project partner. Today there are many test tools that can support testing, but few companies have implemented one, since doing so requires both time and competence, and the choice of test tool can be difficult. The test tools have different advantages and disadvantages, so their suitability depends on the type of project and application. Advantages of using test tools include the ability to automate, to test on several devices simultaneously, and to access devices via the cloud. The challenges are that a test tool can be difficult to install and learn, and that licenses can be expensive. It is therefore important to know, before implementation, which tests and applications the test tools will be used for and who will use them. Based on our study, we conclude that no test tool is fully complete, but they can contribute different functions that make parts of mobile application testing more efficient.
Abstract:
The aim of this thesis is to examine the early vocabulary development of a sample of Swedish children in relation to parental input and early communicative skills. Three studies are situated in an overall description of early language development in children. The data analyzed in the thesis was collected within a larger project at Stockholm University (SPRINT- “Effects of enhanced parental input on young children’s vocabulary development and subsequent literacy development” [VR 2008-5094]). Data analysis was based on parental report via SECDI, the Swedish version of the MacArthur-Bates Communicative Development Inventories, and audio recordings. One study examined parental verbal interaction characteristics in three groups of children with varying vocabulary size at 18 months. The stability of vocabulary development at 18 and 24 months was investigated in a larger study, with focus on children’s vocabulary composition and grammatical abilities. The third study examined interrelations among early gestures, receptive and productive vocabulary, and grammar measured with M3L, i.e. three longest utterances, from 12 to 30 months. Overall results of the thesis highlight the importance of early language development. Variability in different characteristics in parental input is associated with variability in child vocabulary size. Children with large early vocabularies exhibit the most stability in vocabulary composition and the earliest grammatical development. Children’s vocabulary composition may reflect individual stylistic variation. Use of early gestures is associated differentially with receptive and productive vocabulary. Results of the thesis have implications for parents, child- and healthcare personnel, as well as researchers and educational practitioners. 
The results underscore the importance of high quality in adult-child interaction, with rich input fine-tuned to children’s developmental levels and age, together with high awareness of early language development.
Abstract:
Solar plus heat pump systems are often very complex in design, sometimes with special heat pump arrangements and control. Detailed heat pump models therefore make system simulations very slow, and still not very accurate compared with real heat pump performance in a system. The idea here is to start from a standard measured performance map of test points for a heat pump according to EN 14825 and then determine characteristic parameters for a simplified correlation-based model of the heat pump. By plotting heat pump test data in different ways, including in power input and output form and not only as COP, a simplified relation could be seen. Using the same methodology as in the EN 12975 QDT part of the collector test standard, it could be shown that a very simple model describes the heat pump test data very accurately, by identifying four parameters in the correlation equation found. © 2012 The Authors.
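The abstract does not give the correlation equation itself, so purely as an illustration of the parameter-identification step, a four-parameter bilinear model of heating capacity versus source and sink temperature can be fitted to (synthetic) test points with ordinary least squares:

```python
import numpy as np

# Illustration of identifying four correlation parameters from performance-map
# test points. The bilinear model form and the synthetic data below are
# assumptions; the actual EN 14825-based correlation is not given in the abstract.
T_source = np.array([-7.0, 2.0, 7.0, 12.0, -7.0, 2.0, 7.0, 12.0])   # source temps (C)
T_sink   = np.array([35.0, 35.0, 35.0, 35.0, 55.0, 55.0, 55.0, 55.0])  # sink temps (C)
# Synthetic "measured" heating capacity generated from known coefficients.
Q_heat = 8.0 + 0.12 * T_source - 0.05 * T_sink + 0.002 * T_source * T_sink

# Design matrix for Q = a + b*T_source + c*T_sink + d*T_source*T_sink.
A = np.column_stack([np.ones_like(T_source), T_source, T_sink, T_source * T_sink])
params, *_ = np.linalg.lstsq(A, Q_heat, rcond=None)
# params now holds the four identified coefficients (a, b, c, d).
```

Fitting power input the same way, and taking the ratio, then reproduces COP at arbitrary operating points, which is why working in power form rather than directly in COP simplifies the correlation.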
Abstract:
Many contaminants are currently unregulated by the government and do not have a set limit, known as the Maximum Contaminant Level, which is dictated by cost and the best available treatment technology. The Maximum Contaminant Level Goal, on the other hand, is based solely upon health considerations and is non-enforceable. In addition to being naturally occurring, contaminants may enter drinking water supplies through industrial sources, agricultural practices, urban pollution, sprawl, and water treatment byproducts. Exposure to these contaminants is not limited to ingestion and can also occur through dermal absorption and inhalation in the shower. Health risks for the general public include skin damage, increased risk of cancer, circulatory problems, and multiple toxicities. At low levels, these contaminants generally are not harmful in our drinking water. However, children, pregnant women, and people with compromised immune systems are more vulnerable to the health risks associated with these contaminants. Vulnerable people should take additional precautions with drinking water. This research project was conducted in order to learn more about our local drinking water and to characterize our exposure to contaminants. We hope to increase public awareness of water quality issues by educating local residents about their drinking water in order to promote public health and minimize exposure to some of the contaminants contained within public water supplies.
Abstract:
This thesis presents the study and development of fault-tolerant techniques for programmable architectures, the well-known Field Programmable Gate Arrays (FPGAs), customizable by SRAM. FPGAs are becoming more valuable for space applications because of their high density, high performance, reduced development cost and re-programmability. In particular, SRAM-based FPGAs are very valuable for remote missions because of the possibility of being reprogrammed by the user as many times as necessary in a very short period. SRAM-based FPGAs and micro-controllers represent a wide range of components in space applications, and as a result are the focus of this work, more specifically the Virtex® family from Xilinx and the architecture of the 8051 micro-controller from Intel. Triple Modular Redundancy (TMR) with voters is a common high-level technique to protect ASICs against single event upsets (SEUs) and it can also be applied to FPGAs. The TMR technique was first tested in the Virtex® FPGA architecture by using a small design based on counters. Faults were injected in all sensitive parts of the FPGA and a detailed analysis of the effect of a fault in a TMR design synthesized on the Virtex® platform was performed. Results from fault injection and from a radiation ground test facility showed the efficiency of TMR for the related case-study circuit. Although TMR has shown high reliability, this technique presents some limitations, such as area overhead, three times more input and output pins and, consequently, a significant increase in power dissipation. Aiming to reduce TMR costs and improve reliability, an innovative high-level technique for designing fault-tolerant systems in SRAM-based FPGAs was developed, without modification of the FPGA architecture. This technique combines time and hardware redundancy to reduce overhead and to ensure reliability. It is based on duplication with comparison and concurrent error detection.
The new technique proposed in this work was specifically developed for FPGAs to cope with transient faults in the user combinational and sequential logic, while also reducing pin count, area and power dissipation. The methodology was validated by fault injection experiments in an emulation board. The thesis presents comparison results in fault coverage, area and performance between the discussed techniques.
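The two redundancy schemes discussed, TMR with a majority voter and duplication with comparison for concurrent error detection, can be sketched at the logic level. Module outputs are modeled as plain integers here; real designs of course operate on FPGA logic, not Python:

```python
# Logic-level sketch of the two redundancy schemes discussed in the thesis.
# TMR masks a single faulty module; duplication with comparison (DWC) only
# detects a mismatch, which is why the thesis pairs it with time redundancy.
def tmr_vote(a, b, c):
    """Bitwise majority of three redundant module outputs."""
    return (a & b) | (a & c) | (b & c)

def dwc_mismatch(a, b):
    """Duplication with comparison: True when the two copies disagree."""
    return a != b

good = 0b1011                      # fault-free module output
faulty = good ^ 0b0100             # one module hit by an upset (bit flip)

out = tmr_vote(good, good, faulty)     # voter masks the single upset
detected = dwc_mismatch(good, faulty)  # comparator flags the error
```

The trade-off the thesis exploits is visible even at this level: TMR needs three copies plus a voter (area and pin overhead), while DWC needs only two copies but must combine detection with a recovery mechanism, such as re-computation in time, to actually tolerate the fault.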
Abstract:
Uncertain systems currently attract much attention from the academic community, from the standpoint of both scientific research and practical applications. A series of mathematical approaches have emerged to deal with the uncertainties of real physical systems. In this context, the work presented here focuses on the application of control theory to a nonlinear dynamical system with parametric variations, with regard to performance and robustness. As the practical application of this work we used a Quanser coupled-tank system, in a configuration whose mathematical model is represented by a second-order single-input single-output (SISO) system. Control is performed by PID controllers, designed by various techniques, aiming to achieve robust performance and stability when subjected to parameter variations. Other controllers are designed with the intention of comparing the performance and robust stability of such systems. The results are obtained and compared from simulations in MATLAB/Simulink.
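A minimal sketch of PID control of a second-order SISO plant, discretized with forward Euler, is shown below. The plant coefficients and controller gains are illustrative, not the Quanser tank model or the thesis' tuned values:

```python
# Illustrative PID control of a generic second-order SISO plant,
# x'' + 2*x' + x = u, integrated with forward Euler. Plant coefficients
# and gains are assumptions, not the Quanser coupled-tank model.
def simulate(kp, ki, kd, dt=0.001, t_end=10.0):
    x, v = 0.0, 0.0                # plant output and its derivative
    integ = 0.0                    # integral of the error
    setpoint = 1.0
    prev_err = setpoint - x        # avoids a derivative kick at t = 0
    for _ in range(int(t_end / dt)):
        err = setpoint - x
        integ += err * dt
        deriv = (err - prev_err) / dt
        u = kp * err + ki * integ + kd * deriv   # PID control law
        prev_err = err
        # Second-order plant dynamics.
        acc = u - 2.0 * v - 1.0 * x
        v += acc * dt
        x += v * dt
    return x

final = simulate(kp=10.0, ki=5.0, kd=2.0)   # settles near the setpoint
```

The integral term drives the steady-state error to zero, which is why `final` approaches the setpoint; robustness studies like the thesis' then re-run such simulations with perturbed plant coefficients to check that stability and performance are preserved.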
Abstract:
Rural electrification is characterized by geographical dispersion of the population, low consumption, high investment per consumer, and high cost. Solar radiation, moreover, constitutes an inexhaustible source of energy, and photovoltaic panels are used to convert it into electricity. In this study, the manufacturer's equations for the current and power of small photovoltaic systems were adjusted to field conditions. The mathematical analysis was performed on the rural photovoltaic system I-100 from ISOFOTON, with a power of 300 Wp, located at the Experimental Farm Lageado of FCA/UNESP. For the development of these equations, the circuitry of photovoltaic cells was studied in order to apply iterative numerical methods for the determination of the electrical parameters, and to identify possible errors in adapting the equations in the literature to reality. A simulation of a photovoltaic panel was therefore proposed through mathematical equations adjusted according to the local radiation data. The resulting equations provide realistic answers to the user and may assist in the design of these systems, since the calculated maximum power limit ensures the supply of generated energy. This realistic sizing helps establish the possible applications of solar energy for the rural producer and informs the real possibilities of generating electricity from the sun.
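The kind of iterative numerical method applied here can be sketched with the common single-diode photovoltaic cell model solved by Newton's method. All electrical parameters below are hypothetical placeholders, not the ISOFOTON I-100 data:

```python
import math

# Single-diode PV module model, I = Iph - I0*(exp((V+I*Rs)/a) - 1) - (V+I*Rs)/Rsh,
# which is implicit in I and therefore solved iteratively with Newton's method.
# All parameter values are hypothetical, not the ISOFOTON I-100 data.
def panel_current(V, Iph=5.0, I0=1e-9, n=1.3, Ns=36, Rs=0.2, Rsh=200.0, T=298.15):
    Vt = 1.380649e-23 * T / 1.602176634e-19    # thermal voltage kT/q (V)
    a = n * Ns * Vt                             # modified ideality factor
    I = Iph                                     # initial guess: photocurrent
    for _ in range(50):                         # Newton iterations on f(I) = 0
        f = Iph - I0 * (math.exp((V + I * Rs) / a) - 1) - (V + I * Rs) / Rsh - I
        df = -I0 * (Rs / a) * math.exp((V + I * Rs) / a) - Rs / Rsh - 1.0
        I -= f / df
    return I

i_sc = panel_current(0.0)   # short-circuit current, slightly below Iph
```

Sweeping `V` and taking `max(V * panel_current(V))` gives the maximum power point, which is the quantity the study uses for realistic sizing.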