893 results for Logic, Symbolic and mathematical
Abstract:
BACKGROUND OR CONTEXT The higher education sector plays an important role in encouraging students into the STEM pipeline by fostering partnerships with schools, building on universities' long tradition of engagement and outreach to secondary schools. Numerous activities focus on integrated STEM learning experiences aimed at developing conceptual scientific and mathematical knowledge, with opportunities for students to demonstrate and develop skills in working with each other and actively engaging in discussion, decision making and collaborative problem solving (NAS, 2013; AIG, 2015; OCS, 2014). This highlights the importance of developing and delivering engaging, curriculum-connected, integrated STEM activities to inspire the next generation of scientists and engineers and, more generally, to prepare students for post-secondary success. The broad research objective is to gain insight into which engagement activities influence secondary school students' selection of STEM-related careers at university, and to what degree. PURPOSE OR GOAL To evaluate the effectiveness of STEM engagement activities in influencing students' decisions to choose a STEM-related degree at university. APPROACH A survey was conducted with first-year domestic students studying STEM-related fields within the Science and Engineering Faculty at Queensland University of Technology. Of the domestic students commencing in 2015, 29% responded to the survey. The survey was conducted using Survey Monkey and included a variety of questions, ranging from academic performance at school to inspiration for choosing a STEM degree. Responses were analysed on a range of factors to evaluate the influences on students' decisions to study STEM and whether STEM high school engagement activities affected these decisions. To achieve this, the timing of students' decisions about their study area, degree, and university was compared with the timing of STEM engagement activities. DISCUSSION Statistical analysis of the survey data was carried out using SPSS, examining reasons for choosing STEM degrees in terms of gender, academic performance, and the major influencers on decision making. It was found that students choose their university courses based on the subjects they enjoyed and excelled at in school. The results showed a high correlation between enjoyment of a school subject and interest in pursuing that subject at university and beyond. Survey results indicated that students are heavily influenced by their subject teachers and parents in their choice of STEM-related disciplines. In terms of career choice and the timing of decisions, 60% of students had decided on a broad area of study by year 10, while only 15% had decided on a specific course and 10% on a university. The timing of secondary STEM engagement activities is seen as a critical influence on choosing STEM disciplines or selecting senior school subjects, with 80% deciding on a specific degree between years 11 and 12 and 73% deciding on a university in year 12. RECOMMENDATIONS/IMPLICATIONS/CONCLUSION Although the data do not show that STEM engagement activities increase the likelihood of choosing a STEM-related degree, the evidence suggests that students who have participated in STEM activities associate those experiences with their choice to pursue a STEM-related course.
It is important for universities to continue to provide high-quality, engaging and inspirational learning experiences in STEM; to identify and build on students' early interest and engagement; to increase STEM knowledge and awareness; to engage students in interdisciplinary, project-based STEM practices; and to provide them with real-world application experiences that sustain their interest.
Abstract:
This PhD thesis is about certain infinite-dimensional Grassmannian manifolds that arise naturally in geometry, representation theory and mathematical physics. From the physics point of view, one encounters these infinite-dimensional manifolds when trying to understand the second quantization of fermions. The many-particle Hilbert space of the second-quantized fermions is called the fermionic Fock space. A typical element of the fermionic Fock space can be thought of as a linear combination of configurations of m particles and n anti-particles. Geometrically, the fermionic Fock space can be constructed as the space of holomorphic sections of a certain (dual) determinant line bundle lying over the so-called restricted Grassmannian manifold, which is a typical example of an infinite-dimensional Grassmannian manifold one encounters in QFT. The construction should be compared with its well-known finite-dimensional analogue, where one realizes an exterior power of a finite-dimensional vector space as the space of holomorphic sections of a determinant line bundle lying over a finite-dimensional Grassmannian manifold. The connection with infinite-dimensional representation theory stems from the fact that the restricted Grassmannian manifold is an infinite-dimensional homogeneous (Kähler) manifold, i.e. it is of the form G/H, where G is a certain infinite-dimensional Lie group and H its subgroup. A central extension of G acts on the total space of the dual determinant line bundle and also on the space of its holomorphic sections; thus G admits a (projective) representation on the fermionic Fock space. This construction also induces the so-called basic representation of loop groups (of compact groups), which in turn are vitally important in string theory / conformal field theory. The thesis consists of three chapters: the first chapter is an introduction to the background material, and the other two chapters are individually written research articles. The first article gives a new treatment of a well-known question in Yang-Mills theory: when can one lift the action of the gauge transformation group on the space of connection one-forms to the total space of the Fock bundle in a way compatible with the second-quantized Dirac operator? In general there is an obstruction to this (called the Faddeev-Mickelsson anomaly), and various geometric interpretations for this anomaly, using such things as group extensions and bundle gerbes, have been given earlier. In this work we give a new geometric interpretation of the Faddeev-Mickelsson anomaly in terms of differentiable gerbes (certain sheaves of categories) and central extensions of Lie groupoids. The second research article deals with the question of how to define a Dirac-like operator on the restricted Grassmannian manifold, which is an infinite-dimensional space and hence outside the scope of standard Dirac operator theory. The construction relies heavily on infinite-dimensional representation theory, and one of the most technically demanding challenges is to introduce proper normal orderings for certain infinite sums of operators in such a way that all divergences disappear and the infinite sum makes sense as a well-defined operator acting on a suitable Hilbert space of spinors. This research article was motivated by a more extensive ongoing project to construct twisted K-theory classes in Yang-Mills theory via a Dirac-like operator on the restricted Grassmannian manifold.
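To make the comparison concrete, the finite-dimensional statement can be written in one line. The following is a sketch using standard notation (Gr(k, n) for the Grassmannian of k-planes in C^n, Det* for the dual determinant bundle); the dualization conventions may differ from those of the thesis:

```latex
% Finite-dimensional model: via the Pluecker embedding, holomorphic
% sections of the dual determinant bundle over a Grassmannian
% realize (the dual of) an exterior power:
\[
  H^{0}\bigl(\mathrm{Gr}(k,n),\,\mathrm{Det}^{*}\bigr)
  \;\cong\; \bigl(\Lambda^{k}\mathbb{C}^{n}\bigr)^{*} .
\]
% Infinite-dimensional counterpart (the Pressley-Segal picture):
% for a polarized Hilbert space H = H_+ \oplus H_-, the fermionic
% Fock space arises from the restricted Grassmannian as
\[
  \mathcal{F} \;\cong\;
  H^{0}\bigl(\mathrm{Gr}_{\mathrm{res}}(H),\,\mathrm{Det}^{*}\bigr) .
\]
```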
Abstract:
This doctoral dissertation examines Taiwanese politics after the first electoral turnover of power (2000) from the perspective of the structural politicization of society. Because Taiwan moved bloodlessly from an authoritarian one-party system to a multi-party system, it has been regarded as a model case of political transformation. The earlier optimism about Taiwan's democratization has since given way to pessimism, largely because of the strong politicization of society. This study seeks an explanation for that politicization. Structural politicization of society refers to a situation in which the domain of "the political" grows beyond the actual political institutions. Structural politicization easily becomes a social problem, because it often leads to the paralysis of normal political activity (e.g. legislation), a sharp polarization of society, a low threshold for political conflict, and a decline in general social trust. Unlike in Eastern Europe, for example, the former ruling party in Taiwan did not collapse with political liberalization but retained its strong structural position. When power changed hands through elections for the first time, the old ruling party was not ready to hand over the reins of the political system. A years-long struggle over control of the system began between the old and the new ruling party, during which society became strongly politicized. The study explains the politicization of Taiwanese society as the combined effect of several structural features. Such politicization-promoting structural features include slow political change, which preserved the old political cleavages and the strong interests attached to them; an unsuitable constitution; Taiwan's unclear international status and divided identity; and a social structure that facilitates the rapid mobilization of people into political demonstrations. The study draws attention to a so far little-studied political phenomenon: the strong structural politicization of some democratizing societies. The main finding of the study is that the democratization of a one-party system carries within it the seed of structural politicization if the former ruling party does not collapse in the course of democratization.
Abstract:
This study approaches the problem of poverty in the hinterlands of Northeast Brazil through the concept of structural violence, linking the environmental threats posed by climate change, especially those related to droughts, to the broader social struggles in the region. When discussions about potentials and rights are incorporated into the problematic of poverty, a deeper insight is obtained into the various factors behind the phenomenon. It is generally believed that climate change is affecting the already marginalized and poor more than those of higher social standing, and will increasingly do so in the future. The data for this study were collected during three months of fieldwork in the states of Pernambuco and Paraíba in Northeast Brazil. The main methods used were semi-structured interviews and participant observation, including attending seminars concerning climate change in the field. The focus of the work is to compare lay and expert perceptions of what climate change is about, and to question the assumptions about its future effects, chiefly that of increased numbers of 'climate refugees', people forced to migrate due to changes in climate. The focus on droughts, as opposed to other manifestations of climate change, arises not only from the fact that droughts develop over a longer time span than floods or hurricanes, but also from the historical persistence of droughts in the region and the institutional and cultural linkages that have evolved around them. The instances of structural violence highlighted in this study (the drought industry, land use, and the social and power relations present in the region, including those between civil society, the state and the private agribusiness sector) all work against a backdrop of symbolic and moral realms of value production, where relations between the different actors are being negotiated anew with the rise of the climate change discourse. The main theoretical framework of the study consists of Johan Galtung's and Paul Farmer's theories of structural violence, Ulrich Beck's theory of the risk society, and James Scott's theory of everyday peasant resistance.
Abstract:
Indian logic has a long history. It broadly covers the domains of two of the six schools (darsanas) of Indian philosophy, namely Nyaya and Vaisesika. The generally accepted definition of Indian logic over the ages is the science that ascertains valid knowledge either by means of the six senses or by means of the five members of the syllogism. In other words, perception and inference constitute the subject matter of logic. The science of logic evolved in India through three ages, the ancient, the medieval and the modern, spanning almost thirty centuries. Over the past three decades, advances in computer science, and in artificial intelligence in particular, have drawn researchers in these areas to the basic problems of language, logic and cognition. In the 1980s, artificial intelligence evolved into knowledge-based and intelligent system design, and the knowledge base and inference engine became standard subsystems of an intelligent system. One of the important issues in the design of such systems is knowledge acquisition from humans who are experts in a branch of learning (such as medicine or law) and the transfer of that knowledge to a computing system. The second important issue in such systems is the validation of the knowledge base, i.e., ensuring that the knowledge is complete and consistent. It is in this context that a comparative study of Indian logic with recent theories of logic, language and knowledge engineering will help computer scientists understand the deeper implications of the terms and concepts they are currently using and attempting to develop.
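To make the validation issue concrete, here is a minimal, purely illustrative Python sketch (the rules and predicate names are hypothetical, not drawn from the text): a propositional rule base is closed under forward chaining, and the closure is then checked for a simple form of inconsistency.

```python
# Illustrative sketch: forward chaining over a propositional rule
# base, followed by a naive consistency check (deriving both p and
# not_p flags a contradiction). All rule content is hypothetical.

RULES = [
    ({"fever", "rash"}, "measles"),     # antecedents -> consequent
    ({"measles"}, "contagious"),
    ({"vaccinated"}, "not_measles"),
]

def forward_chain(facts, rules):
    """Compute the deductive closure of the given facts."""
    derived = set(facts)
    changed = True
    while changed:
        changed = False
        for antecedents, consequent in rules:
            if antecedents <= derived and consequent not in derived:
                derived.add(consequent)
                changed = True
    return derived

def is_consistent(derived):
    """Report False if some p and not_p are both derived."""
    return not any("not_" + p in derived for p in derived)

closure = forward_chain({"fever", "rash", "vaccinated"}, RULES)
print(sorted(closure), "consistent:", is_consistent(closure))
```

Running the sketch derives both measles and not_measles, so the check reports an inconsistency; a completeness check would ask, analogously, whether every expected query is derivable from the rule base.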
Abstract:
We describe a compiler for the Flat Concurrent Prolog language on a message-passing multiprocessor architecture. This compiler permits symbolic and declarative programming in the syntax of Guarded Horn Rules. The implementation has been verified and tested on the 64-node PARAM parallel computer developed by C-DAC (Centre for the Development of Advanced Computing, India). Flat Concurrent Prolog (FCP) is a logic programming language designed for concurrent programming and parallel execution. It is a process-oriented language, which embodies dataflow synchronization and guarded commands as its basic control mechanisms. An identical algorithm is executed on every processor in the network. We assume regular network topologies such as mesh, ring, etc. Each node has a local memory. The algorithm comprises two important parts: reduction and communication. The most difficult task is to integrate the solutions of the problems that arise in the implementation in a coherent and efficient manner. We have tested the efficacy of the compiler on various benchmark problems of the ICOT project reported in the recent book by Evan Tick. These problems include Quicksort, 8-queens, and prime number generation. The results of the preliminary tests are favourable. We are currently examining issues like indexing and load balancing to further optimize our compiler.
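The reduction/communication structure can be sketched abstractly. Below is a toy Python rendering of the per-node loop (all names hypothetical; this is not the compiler's actual runtime): each node repeatedly reduces local goals against guarded clauses, committing to the first clause whose guard holds, and then exchanges suspended goals with its neighbours.

```python
# Toy sketch of the per-node execution loop: a reduction phase over
# local goals followed by a communication phase with neighbours.
# Guards and clause bodies are plain Python callables here.
from collections import deque

def node_step(goal_queue, clauses, send, recv):
    """One reduce-then-communicate cycle on a single node."""
    suspended = []
    for _ in range(len(goal_queue)):            # reduction phase
        goal = goal_queue.popleft()
        for guard, body in clauses:
            if guard(goal):                     # ask: test the guard
                goal_queue.extend(body(goal))   # commit: spawn body goals
                break
        else:
            suspended.append(goal)              # no guard held: suspend
    for goal in suspended:                      # communication phase
        send(goal)                              # ship to a neighbouring node
    goal_queue.extend(recv())                   # absorb incoming goals

# Tiny demo: even-numbered goals reduce away, odd ones are shipped.
q = deque(range(4))
node_step(q, [(lambda g: g % 2 == 0, lambda g: [])],
          send=print, recv=lambda: [])
```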
Abstract:
The aim of logic synthesis is to produce circuits that implement the given Boolean function while meeting timing constraints and requiring the minimum silicon area. Logic synthesis involves two steps, namely logic decomposition and technology mapping. Existing methods treat the two as separate operations. The traditional approach is to minimize the number of literals without considering the target technology during the decomposition phase. The decomposed expressions are then mapped onto the target technology to optimize the area, and timing optimization is carried out subsequently. A new approach that treats logic decomposition and technology mapping as a single operation is presented. The logic decomposition is based on the parameters of the target technology, and area and timing optimization are carried out during the logic decomposition phase itself. Results on MCNC circuits are presented to show that this method produces circuits that are 38% faster while requiring a 14% increase in area.
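The benefit of folding technology parameters into decomposition can be seen with a toy cost model. A hypothetical Python sketch (the cell library figures are invented): for an n-input AND, a chain and a balanced tree of 2-input cells cost the same area, so only a decomposition step that already knows the library's delay figures can pick the right structure for a timing constraint.

```python
# Toy illustration: evaluating alternative decompositions of an
# n-input AND against target-technology parameters during the
# decomposition step itself. Library numbers are invented.
import math

LIB = {"AND2": {"area": 2.0, "delay": 0.35}}    # hypothetical cell data

def chain_cost(n):
    """Linear chain of 2-input ANDs: n-1 gates, n-1 logic levels."""
    g = LIB["AND2"]
    return {"area": (n - 1) * g["area"], "delay": (n - 1) * g["delay"]}

def tree_cost(n):
    """Balanced tree of 2-input ANDs: n-1 gates, ceil(log2 n) levels."""
    g = LIB["AND2"]
    return {"area": (n - 1) * g["area"],
            "delay": math.ceil(math.log2(n)) * g["delay"]}

for n in (4, 8, 16):
    print(n, "chain:", chain_cost(n), "tree:", tree_cost(n))
```

A literal-count-only decomposition cannot distinguish the two structures; with the library's delay parameter available during decomposition, the tree is chosen whenever the timing constraint demands it.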
Abstract:
Use of dipolar and quadrupolar couplings for quantum information processing (QIP) by nuclear magnetic resonance (NMR) is described. In these cases, instead of the individual spins being qubits, the 2^n energy levels of the spin system can be treated as an n-qubit system. It is demonstrated that QIP in such systems can be carried out using transition-selective pulses, in CH3CN, 13C-labelled CH3CN, 7Li (I = 3/2) and 133Cs (I = 7/2), oriented in liquid crystals, yielding 2- and 3-qubit systems. Creation of pseudopure states, implementation of logic gates and arithmetic operations (half-adder and subtractor) have been carried out in these systems using transition-selective pulses.
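As an illustration of how the 2^n levels stand in for qubits, consider an oriented spin I = 3/2 nucleus such as 7Li. A schematic sketch (the level-to-qubit labelling convention here is illustrative):

```latex
% The four Zeeman levels of an oriented I = 3/2 spin, labelled as
% a two-qubit register:
\[
  |m=\tfrac{3}{2}\rangle \equiv |00\rangle,\quad
  |m=\tfrac{1}{2}\rangle \equiv |01\rangle,\quad
  |m=-\tfrac{1}{2}\rangle \equiv |10\rangle,\quad
  |m=-\tfrac{3}{2}\rangle \equiv |11\rangle .
\]
% A pi pulse selective on the single-quantum transition
% |10> <-> |11> exchanges only those two states,
\[
  \pi_{|10\rangle\leftrightarrow|11\rangle}:\quad
  |10\rangle \leftrightarrow |11\rangle,
  \qquad |00\rangle,\ |01\rangle \ \text{unaffected},
\]
% i.e. it acts as a CNOT: the second qubit is flipped only when
% the first qubit is 1.
```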
Abstract:
Ergonomic design of products demands accurate human dimensions, i.e., anthropometric data. Manual measurement of live subjects has several limitations: it is time-consuming, it requires the presence of subjects for every new measurement, and it involves physical contact. Hence the data currently available are limited, and anthropometric data related to facial features are particularly difficult to obtain. In this paper, we discuss a methodology to automatically detect facial features and landmarks from scanned human head models. Segmentation of the face into meaningful patches corresponding to facial features is achieved using watershed algorithms and mathematical morphology tools. Many important physiognomic landmarks are identified heuristically.
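As a rough indication of the approach, here is a minimal Python sketch using scikit-image, operating on a 2D depth image of a face rather than the 3D scan itself; the pipeline details are illustrative, not the paper's exact procedure.

```python
# Minimal sketch: marker-based watershed segmentation of a 2D depth
# image into candidate facial-feature patches using morphological
# tools. Illustrative only; the paper works on scanned head models.
import numpy as np
from skimage.segmentation import watershed
from skimage.feature import peak_local_max
from skimage.morphology import closing, disk

def segment_features(depth):
    """Split a depth map into patches around local prominences."""
    smooth = closing(depth, disk(3))             # suppress scanner noise
    peaks = peak_local_max(smooth, min_distance=15)
    markers = np.zeros_like(depth, dtype=int)    # one seed per candidate
    markers[tuple(peaks.T)] = np.arange(1, len(peaks) + 1)
    # Flood the negated depth so basins form around prominences
    # (nose tip, chin, brow ridges, ...).
    return watershed(-smooth, markers)

labels = segment_features(np.random.rand(128, 128))  # stand-in data
print(labels.max(), "candidate patches")
```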
Abstract:
The hierarchical structure and mathematical properties of the simplified Navier-Stokes equations (SNSE) are studied for viscous flow over a sphere and a jet of compressible fluid. All kinds of hierarchical SNSE can be divided into three types according to their mathematical properties, and into five groups according to their physical content. A multilayer structure model for viscous shear flow with a main stream direction is presented. For the example of viscous incompressible flow over a flat plate, there exist three layers for both the separated flow and the attached flow; the character of the transition from the three layers of attached flow to those of separated flow is elucidated. The concept of a transition layer situated between the viscous layer and the inviscid layer is introduced. The transition layer captures the interaction between viscous flow and inviscid flow. The inner-outer-layers-matched SNSE proposed by the present author in the past is developed into the layers-matched (LsM) SNSE.
Abstract:
This dissertation studies the long-term behavior of random Riccati recursions and a mathematical epidemic model. Riccati recursions arise in Kalman filtering: the error covariance matrix of the Kalman filter satisfies a Riccati recursion. Convergence conditions for time-invariant Riccati recursions are well studied. We focus on the time-varying case and assume that the regressor matrix is random, independent and identically distributed according to a given distribution whose probability density function is continuous, supported on the whole space, and decaying faster than any polynomial. We study the geometric convergence of the probability distribution. We also study the global dynamics of epidemic spread over complex networks for various models. For instance, in the discrete-time Markov chain model, each node is either healthy or infected at any given time. In this setting, the number of states increases exponentially with the size of the network. The Markov chain has a unique stationary distribution, in which all the nodes are healthy with probability 1. Since the probability distribution of a Markov chain defined on a finite state space converges to the stationary distribution, this Markov chain model concludes that the epidemic dies out after a long enough time. To analyze the Markov chain model, we study a nonlinear epidemic model whose state at any given time is the vector of the marginal probabilities of infection of each node in the network at that time. Convergence to the origin in the epidemic map implies the extinction of the epidemic. The nonlinear model is upper-bounded by linearizing the model at the origin. As a result, the origin is the globally stable unique fixed point of the nonlinear model if the linear upper bound is stable. The nonlinear model has a second fixed point when the linear upper bound is unstable. We carry out stability analysis of the second fixed point for both the discrete-time and continuous-time models. Returning to the Markov chain model, we claim that the stability of the linear upper bound for the nonlinear model is strongly related to the extinction time of the Markov chain. We show that a stable linear upper bound is a sufficient condition for fast extinction, and that the probability of survival is bounded by the nonlinear epidemic map.
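The linearization argument can be made concrete in a standard discrete-time SIS form (the notation here is illustrative and assumes an undirected network with adjacency matrix A; the dissertation treats several model variants):

```latex
% Map for the marginal infection probabilities x_i(t), with
% infection parameter beta and recovery parameter delta:
\[
  x_i(t+1) \;=\; (1-\delta)\,x_i(t)
  + \bigl(1-x_i(t)\bigr)\Bigl(1-\prod_{j}\bigl(1-\beta A_{ij}x_j(t)\bigr)\Bigr).
\]
% The right-hand side is dominated entrywise by its linearization
% at the origin,
\[
  x(t+1) \;\le\; \bigl((1-\delta)I + \beta A\bigr)\,x(t),
\]
% so the all-healthy state is globally stable, and the epidemic
% dies out quickly, whenever
\[
  \rho\bigl((1-\delta)I + \beta A\bigr)
  \;=\; 1-\delta+\beta\,\lambda_{\max}(A) \;<\; 1 .
\]
```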
Abstract:
Optical Coherence Tomography (OCT) is a popular, rapidly growing imaging technique with an increasing number of bio-medical applications due to its noninvasive nature. However, there are three major challenges in understanding and improving an OCT system: (1) Obtaining an OCT image is not easy. It either takes a real medical experiment or requires days of computer simulation. Without much data, it is difficult to study the physical processes underlying OCT imaging of different objects, simply because there are not many imaged objects. (2) Interpreting an OCT image is also hard. This challenge is more profound than it appears. For instance, it would require a trained expert to tell from an OCT image of human skin whether there is a lesion or not. This is expensive in its own right, but even the expert cannot be sure about the exact size of the lesion or the widths of the various skin layers. The take-away message is that analyzing an OCT image even at a high level usually requires a trained expert, and pixel-level interpretation is simply unrealistic. The reason is simple: we have OCT images but not their underlying ground-truth structure, so there is nothing to learn from. (3) The imaging depth of OCT is very limited (a millimeter or less in human tissue). While OCT utilizes infrared light for illumination to stay noninvasive, the downside is that photons at such long wavelengths can only penetrate a limited depth into the tissue before getting back-scattered. To image a particular region of a tissue, photons first need to reach that region. As a result, OCT signals from deeper regions of the tissue are both weak (since few photons reach that far) and distorted (due to multiple scattering of the contributing photons). This fact alone makes OCT images very hard to interpret.
This thesis addresses the above challenges by developing an advanced Monte Carlo simulation platform which is 10,000 times faster than the state-of-the-art simulator in the literature, bringing the simulation time down from 360 hours to a single minute. This powerful simulation tool not only enables us to efficiently generate as many OCT images of objects of arbitrary structure and shape as we want on a common desktop computer, but also provides the underlying ground truth of the simulated images, because we specify it at the start of the simulation. This is one of the key contributions of this thesis. Building such a powerful simulation tool required a thorough understanding of the signal formation process, a careful implementation of the importance sampling/photon splitting procedure, efficient use of a voxel-based mesh system for determining photon-mesh intersections, and parallel computation of the different A-scans that constitute a full OCT image, among other programming and mathematical techniques, all of which are explained in detail later in the thesis.
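For orientation, the core random-walk mechanics of a photon-packet Monte Carlo simulation can be sketched in a few lines. The heavily simplified Python sketch below covers a homogeneous slab only (standard MCML-style mechanics with invented optical parameters); the thesis simulator adds importance sampling, photon splitting, the voxel-based mesh, and parallel A-scan computation on top of this skeleton.

```python
# Heavily simplified photon-packet random walk in a homogeneous
# slab. Parameters are invented; real tissue models vary per layer.
import math, random

MU_S, MU_A, G = 10.0, 0.1, 0.9   # scattering, absorption (1/mm), anisotropy

def henyey_greenstein(g):
    """Sample the cosine of the polar scattering angle."""
    if g == 0:
        return 2 * random.random() - 1
    s = (1 - g * g) / (1 - g + 2 * g * random.random())
    return (1 + g * g - s * s) / (2 * g)

def propagate(depth_mm):
    """Walk one packet; return the weight it carries back out."""
    z, uz, weight = 0.0, 1.0, 1.0
    while 0.0 <= z <= depth_mm and weight > 1e-4:
        step = -math.log(1.0 - random.random()) / (MU_S + MU_A)
        z += uz * step                           # move
        weight *= MU_S / (MU_S + MU_A)           # absorb (weight loss)
        cos_t = henyey_greenstein(G)             # scatter
        phi = 2 * math.pi * random.random()
        sin_t = math.sqrt(max(0.0, 1 - cos_t * cos_t))
        uz = uz * cos_t + math.sqrt(max(0.0, 1 - uz * uz)) * sin_t * math.cos(phi)
    return weight if z < 0.0 else 0.0            # escaped at the surface

mean_back = sum(propagate(1.0) for _ in range(20000)) / 20000
print("mean back-scattered weight:", mean_back)
```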
Next we aim at the inverse problem: given an OCT image, predict/reconstruct its ground-truth structure at the pixel level. By solving this problem we can interpret an OCT image completely and precisely without the help of a trained expert. It turns out that we can do very well: for simple structures we are able to reconstruct the ground truth of an OCT image more than 98% correctly, and for more complicated structures (e.g., a multi-layered brain structure) we reach 93%. We achieve this through extensive use of Machine Learning. The success of the Monte Carlo simulation already puts us in a strong position by providing a great deal of data (effectively unlimited) in the form of (image, truth) pairs. Through a transformation of the high-dimensional response variable, we convert the learning task into a multi-output multi-class classification problem and a multi-output regression problem. We then build a hierarchical architecture of machine learning models (a committee of experts) and train different parts of the architecture with specifically designed data sets. In prediction, an unseen OCT image first goes through a classification model to determine its structure (e.g., the number and types of layers present in the image); the image is then handed to a regression model trained specifically for that particular structure, which predicts the thicknesses of the different layers and thereby reconstructs the ground truth of the image. We also demonstrate that ideas from Deep Learning can further improve the performance.
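A schematic of the committee-of-experts routing described above, written in scikit-learn terms (the model choices, feature representation, and data are stand-ins, not the thesis's actual models):

```python
# Schematic two-stage pipeline: a classifier predicts the structure
# class of an image, then a regressor trained for that class
# predicts the per-layer thicknesses. All data here is synthetic.
import numpy as np
from sklearn.ensemble import RandomForestClassifier, RandomForestRegressor

rng = np.random.default_rng(0)
X = rng.normal(size=(600, 64))              # stand-in image features
structure = rng.integers(0, 3, size=600)    # e.g. number/type of layers
thickness = rng.random((600, 4))            # per-layer ground truth

router = RandomForestClassifier().fit(X, structure)
experts = {c: RandomForestRegressor().fit(X[structure == c],
                                          thickness[structure == c])
           for c in np.unique(structure)}

def predict(x):
    """Route an unseen image to its structure-specific expert."""
    c = router.predict(x.reshape(1, -1))[0]
    return c, experts[c].predict(x.reshape(1, -1))[0]

print(predict(rng.normal(size=64)))
```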
It is worth pointing out that solving the inverse problem automatically improves the effective imaging depth: previously the lower half of an OCT image (i.e., greater depth) could hardly be seen, but it now becomes fully resolved. Interestingly, although the OCT signals constituting the lower half of the image are weak, messy, and uninterpretable to human eyes, they still carry enough information that, when fed into a well-trained machine learning model, they yield precisely the true structure of the object being imaged. This is another case where Artificial Intelligence (AI) outperforms humans. To the best of the author's knowledge, this thesis is not only a successful attempt but also the first attempt to reconstruct an OCT image at the pixel level. Even attempting this kind of task requires fully annotated OCT images, and a lot of them (hundreds or even thousands), which is clearly impossible without a powerful simulation tool like the one developed in this thesis.
Abstract:
This paper discusses road damage caused by heavy commercial vehicles. Chapter 1 presents some important terminology and a brief historical review of road construction and vehicle-road interaction, from ancient times to the present day. The main types of vehicle-generated road damage, and the methods used by pavement engineers to analyze them, are discussed in Chapter 2. Attention is also given to the main features of the response of road surfaces to vehicle loads and to the mathematical models that have been developed to predict road response. Chapter 3 reviews the effects on road damage of vehicle features which can be studied without consideration of vehicle dynamics. These include gross vehicle weight, axle and tire configurations, tire contact conditions and static load sharing in axle group suspensions. The dynamic tire forces generated by heavy vehicles are examined in Chapter 4. The discussion includes their simulation and measurement, their principal characteristics, the effects of tire and suspension design on dynamic forces, and the potential benefits of using advanced suspensions to minimize dynamic tire forces. Chapter 5 discusses methods for estimating the effects of dynamic tire forces on road damage. The two main approaches are either to examine the statistics of the forces themselves, or to calculate the response of a pavement model to the forces and then the resulting wear using a material damage model. The issues involved in assessing vehicles for 'road friendliness' are discussed in Chapter 6. Possible assessment methods include measuring strains in an instrumented pavement traversed by the vehicle, measuring dynamic tire forces, or measuring vehicle parameters such as the 'natural frequency' and 'damping ratio'. Each of these measurements involves different assumptions and analysis methods for converting the results into some measure of road damage. Chapter 7 summarizes the main conclusions of the paper and gives recommendations for tire and suspension design, road design and construction, and vehicle regulations.
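One material damage model of the kind mentioned in the discussion of Chapter 5 is the widely used empirical fourth-power law, stated here only as an illustration (the paper itself surveys the alternatives):

```latex
% Empirical fourth-power law: the pavement wear D caused by an
% axle load P, relative to a standard axle load P_0, scales as
\[
  D \;\propto\; \left(\frac{P}{P_0}\right)^{4},
\]
% so a stream of dynamic tire forces P_j applied near one point
% of the pavement accumulates wear of the order of
\[
  D_{\mathrm{total}} \;\propto\; \sum_{j} P_j^{4} .
\]
```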