556 results for Predicate encryption
Abstract:
Executive summary
Digital systems have transformed, and will continue to transform, our world. Supportive government policy, a strong research base and a history of industrial success make the UK particularly well-placed to realise the benefits of the emerging digital society. These benefits have already been substantial, but they remain at risk. Protecting the benefits and minimising the risks requires reliable and robust cybersecurity, underpinned by a strong research and translation system.
Trust is essential for growing and maintaining participation in the digital society. Organisations earn trust by acting in a trustworthy manner: building systems that are reliable and secure, treating people, their privacy and their data with respect, and providing credible and comprehensible information to help people understand how secure they are.
Resilience, the ability to function, adapt, grow, learn and transform under stress or in the face of shocks, will help organisations deliver systems that are reliable and secure. Resilient organisations can better protect their customers, provide more useful products and services, and earn people’s trust.
Research and innovation in industry and academia will continue to make important contributions to creating this resilient and trusted digital environment. Research can illuminate how best to build, assess and improve digital systems, integrating insights from different disciplines, sectors and around the globe. It can also generate advances to help cybersecurity keep up with the continued evolution of cyber risks.
Translation of innovative ideas and approaches from research will create a strong supply of reliable, proven solutions to difficult-to-predict cybersecurity risks. This is best achieved by maximising the diversity and number of innovations that see the light of day as products.
Policy, practice and research will all need to adapt. The recommendations made in this report seek to set up a trustworthy, self-improving and resilient digital environment that can thrive in the face of unanticipated threats, and earn the trust people place in it.
Innovation and research will be particularly important to the UK’s economy as it establishes a new relationship with the EU. Cybersecurity delivers important economic benefits, both by underpinning the digital foundations of UK business and trade and also through innovation that feeds directly into growth. The findings of this report will be relevant regardless of how the UK’s relationship to the EU changes.
Headline recommendations
● Trust: Governments must commit to preserving the robustness of encryption, including end-to-end encryption, and promoting its widespread use. Encryption is a foundational security technology that is needed to build user trust, improve security standards and fully realise the benefits of digital systems.
● Resilience: Government should commission an independent review of the UK’s future cybersecurity needs, focused on the institutional structures needed to support resilient and trustworthy digital systems in the medium and longer term. A self-improving, resilient digital environment will need to be guided and governed by institutions that are transparent, expert and have a clear and widely-understood remit.
● Research: A step change in cybersecurity research and practice should be pursued; it will require a new approach to research, focused on identifying ambitious high-level goals and enabling excellent researchers to pursue those ambitions. This would build on the UK's existing strengths in many aspects of cybersecurity research and ultimately help build a resilient and trusted digital sector based on excellent research and world-class expertise.
● Translation: The UK should promote a free and unencumbered flow of cybersecurity ideas from research to practical use and support approaches that have public benefits beyond their short-term financial return. The unanticipated nature of future cyber threats means that a diverse set of cybersecurity ideas and approaches will be needed to build resilience and adaptivity. Many of the most valuable ideas will have broad security benefits for the public, beyond any direct financial returns.
Abstract:
As the development of a viable quantum computer nears, existing widely used public-key cryptosystems, such as RSA, will no longer be secure. Thus, significant effort is being invested into post-quantum cryptography (PQC). Lattice-based cryptography (LBC) is one such promising area of PQC, which offers versatile, efficient, and high-performance security services. However, the vulnerability of these implementations to side-channel attacks (SCA) remains significantly understudied. Most, if not all, lattice-based cryptosystems require noise samples generated from a discrete Gaussian distribution, and a successful timing analysis attack can break the whole cryptosystem, making the discrete Gaussian sampler the module most vulnerable to SCA. This research proposes countermeasures against timing information leakage with FPGA-based designs of CDT-based discrete Gaussian samplers with constant response time, targeting encryption and signature scheme parameters. The proposed designs are compared against the state of the art and are shown to significantly outperform existing implementations. For encryption, the proposed sampler is 9x faster than the only other existing time-independent CDT sampler design. For signatures, the first time-independent CDT sampler in hardware is proposed.
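As an illustration of the core idea (a minimal Python sketch, not the paper's FPGA design), a CDT sampler can be made time-independent by scanning every entry of the cumulative table on every draw instead of binary-searching it, so the running time no longer depends on the sampled value. The table construction, precision and parameters below are simplified assumptions:

import math
import secrets

def build_cdt(sigma, tail_cut=6):
    # Cumulative table for a one-sided discrete Gaussian, scaled to 64-bit
    # integers; a real sampler would use higher precision and handle the sign.
    bound = int(tail_cut * sigma) + 1
    weights = [math.exp(-(x * x) / (2.0 * sigma * sigma)) for x in range(bound)]
    total = sum(weights)
    cdt, acc = [], 0.0
    for w in weights:
        acc += w / total
        cdt.append(min(int(acc * (1 << 64)), (1 << 64) - 1))
    return cdt

def sample_time_independent(cdt):
    # Scan the whole table on every call so the number of comparisons is
    # constant; in C or hardware the comparison itself must also be made
    # constant time (branch-free), which Python can only approximate.
    r = secrets.randbits(64)
    value = 0
    for threshold in cdt:
        value += int(r >= threshold)
    return value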
Abstract:
This project compares two existing Internet-of-Things (IoT) platforms, SensibleThings (ST) and Global Sensors Networks (GSN), and can serve as further work on the investigation of these platforms. Comparing the platforms and learning from each aims to contribute to the improvement of future platform development. The detailed comparison mainly concerns platform features, communication and data-presentation-frequency performance under stress, and node scalability on a single limited device. The study was conducted by developing applications on each platform and measuring performance under the same conditions in a household network environment. All of these aspects have yielded results and conclusions. In qualitative terms, GSN performs better with respect to swift node development and deployment, data management, node subscription and its connection retry mechanism, whereas ST is superior in network package encryption, platform reliability, session-initialisation latency, and degree of development freedom. In quantitative terms, GSN nodes show better resistance to data-push pressure, while ST nodes work with lower session latency. In terms of data presentation frequency, an ST node can reach a higher update frequency than a GSN node. Regarding node scalability on a single limited device, ST nodes have, on average, lower latency than GSN nodes when fewer than 15 nodes run on the device; however, due to GSN's sharing mechanism, its nodes scale better on one limited device when the platform nodes have similar jobs.
Abstract:
Contemporary integrated circuits are designed and manufactured in a globalized environment, leading to concerns about piracy, overproduction and counterfeiting. One class of techniques to combat these threats is circuit obfuscation, which seeks to modify the gate-level (or structural) description of a circuit without affecting its functionality in order to increase the complexity and cost of reverse engineering. Most existing circuit obfuscation methods are based on inserting additional logic (called “key gates”) or camouflaging existing gates in order to make it difficult for a malicious user to obtain the complete layout information without extensive computations to determine key-gate values. However, when the netlist or the circuit layout, although camouflaged, is available to the attacker, they can use advanced logic analysis and circuit simulation tools and Boolean SAT solvers to reveal the unknown gate-level information without exhaustively trying all the input vectors, thus bringing down the complexity of reverse engineering. To counter this problem, some ‘provably secure’ logic encryption algorithms that emphasize methodical selection of camouflaged gates have previously been proposed in the literature [1,2,3]. The contribution of this paper is the creation and simulation of a new layout obfuscation method that uses don't care conditions. We also present a proof of concept of a new functional or logic obfuscation technique that not only conceals but modifies the circuit functionality in addition to the gate-level description, and can be implemented automatically during the design process. Our layout obfuscation technique utilizes don’t care conditions (namely, Observability and Satisfiability Don’t Cares) inherent in the circuit to camouflage selected gates and modify sub-circuit functionality while meeting the overall circuit specification. Here, camouflaging or obfuscating a gate means replacing the candidate gate with a 4x1 multiplexer that can be configured to perform any 2-input/1-output function, as proposed by Bao et al. [4]. It is important to emphasize that our approach not only obfuscates but alters sub-circuit-level functionality in an attempt to make IP piracy difficult. The choice of gates to obfuscate determines the effort required to reverse engineer or brute-force the design. As such, we propose a method of camouflaged gate selection based on the intersection of output logic cones. By choosing these candidate gates methodically, the complexity of reverse engineering can be made exponential, thus making it computationally very expensive to determine the true circuit functionality. We propose several heuristic algorithms to maximize the reverse engineering complexity based on don't-care-based obfuscation and methodical gate selection. Thus, the goal of protecting the design IP from malicious end users is achieved. It also makes it significantly harder for rogue elements in the supply chain to use, copy or replicate the same design with a different logic. We analyze the reverse engineering complexity by applying our obfuscation algorithm on ISCAS-85 benchmarks. Our experimental results indicate that significant reverse engineering complexity can be achieved at minimal design overhead (average area overhead for the proposed layout obfuscation methods is 5.51% and average delay overhead is about 7.732%). We discuss the strengths and limitations of our approach and suggest directions that may lead to improved logic encryption algorithms in the future.
References: [1] R. Chakraborty and S. Bhunia, “HARPOON: An Obfuscation-Based SoC Design Methodology for Hardware Protection,” IEEE Transactions on Computer-Aided Design of Integrated Circuits and Systems, vol. 28, no. 10, pp. 1493–1502, 2009. [2] J. A. Roy, F. Koushanfar, and I. L. Markov, “EPIC: Ending Piracy of Integrated Circuits,” in Design, Automation and Test in Europe (DATE), 2008, pp. 1069–1074. [3] J. Rajendran, M. Sam, O. Sinanoglu, and R. Karri, “Security Analysis of Integrated Circuit Camouflaging,” in ACM Conference on Computer and Communications Security (CCS), 2013. [4] B. Liu and B. Wang, “Embedded Reconfigurable Logic for ASIC Design Obfuscation Against Supply Chain Attacks,” in Design, Automation and Test in Europe Conference and Exhibition (DATE), 2014, pp. 1–6.
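The gate-replacement step described above can be illustrated with a small sketch (Python used purely as truth-table notation, not as a hardware description; the configuration values are invented examples): a camouflaged cell is a 4x1 multiplexer whose select lines are the original gate inputs and whose four data inputs are hidden configuration bits, so the same cell can realise any 2-input/1-output function.

def mux4_gate(cfg, a, b):
    # cfg is the hidden 4-bit configuration (the gate's truth table);
    # the original inputs a, b drive the multiplexer's select lines.
    return cfg[(a << 1) | b]

# Illustrative configurations: the same camouflaged cell behaves as AND,
# XOR or NOR depending on cfg, which is exactly what an attacker without
# the configuration must recover for every obfuscated gate.
AND_CFG = (0, 0, 0, 1)
XOR_CFG = (0, 1, 1, 0)
NOR_CFG = (1, 0, 0, 0)

for a in (0, 1):
    for b in (0, 1):
        assert mux4_gate(AND_CFG, a, b) == (a & b)
        assert mux4_gate(XOR_CFG, a, b) == (a ^ b)
        assert mux4_gate(NOR_CFG, a, b) == int(not (a | b))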
Abstract:
Imagine being told that your wage was going to be cut in half. Well, that’s what’s soon going to happen to those who make money from Bitcoin mining, the process of earning the online currency Bitcoin. The current expected date for this change is 11 July 2016. Many see this as the day when Bitcoin prices will rocket and when Bitcoin owners could make a great deal of money. Others see it as the start of a Bitcoin crash. At present no one quite knows which way it will go. Bitcoin was created in 2009 by someone known as Satoshi Nakamoto, borrowing from a wide range of earlier research. It is a cryptocurrency, meaning it uses digital encryption techniques to create bitcoins and secure financial transactions. It doesn’t need a central government or organisation to regulate it, nor a broker to manage payments. Conventional currencies usually have a central bank that creates money and controls its supply. Bitcoin is instead created when individuals “mine” for it by using their computers to perform complex calculations through special software. The algorithm behind Bitcoin is designed to limit the number of bitcoins that can ever be created. All Bitcoin transactions are recorded on a public database known as a blockchain. Every time someone mines for Bitcoin, the result is recorded in a new block that is transmitted to every Bitcoin app across the network, like a bank updating its online records.
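The supply limit mentioned above comes from the halving schedule itself: the block reward started at 50 bitcoins and is cut in half every 210,000 blocks. A short illustrative calculation (a Python sketch, not from the article) shows how this geometric series caps total supply just under 21 million bitcoins:

def total_bitcoin_supply(initial_subsidy_btc=50, halving_interval=210_000):
    # Work in satoshis (1 BTC = 100,000,000 satoshis), as Bitcoin itself does,
    # and keep halving the integer subsidy until it reaches zero.
    subsidy = initial_subsidy_btc * 100_000_000
    total = 0
    while subsidy > 0:
        total += subsidy * halving_interval
        subsidy //= 2
    return total / 100_000_000

print(total_bitcoin_supply())  # just under 21 million BTC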
Abstract:
Edge-labeled graphs have proliferated rapidly over the last decade due to the increased popularity of social networks and the Semantic Web. In social networks, relationships between people are represented by edges, and each edge is labeled with a semantic annotation. Hence, a huge single graph can express many different relationships between entities. The Semantic Web represents each single fragment of knowledge as a triple (subject, predicate, object), which is conceptually identical to an edge from subject to object labeled with the predicate. A set of triples constitutes an edge-labeled graph on which knowledge inference is performed. Subgraph matching has been extensively used as a query language for patterns in the context of edge-labeled graphs. For example, in social networks, users can specify a subgraph matching query to find all people that have certain neighborhood relationships. Heavily used fragments of the SPARQL query language for the Semantic Web and graph queries of other graph DBMSs can also be viewed as subgraph matching over large graphs. Though subgraph matching has been extensively studied as a query paradigm in the Semantic Web and in social networks, a user can get a large number of answers in response to a query. These answers can be shown to the user in accordance with an importance ranking. In this thesis proposal, we present four different scoring models along with scalable algorithms to find the top-k answers via a suite of intelligent pruning techniques. The suggested models cover a practically important subset of the SPARQL query language augmented with some additional useful features. The first model, called Substitution Importance Query (SIQ), identifies the top-k answers whose scores are calculated from the matched vertices' properties in each answer in accordance with a user-specified notion of importance. The second model, called Vertex Importance Query (VIQ), identifies important vertices in accordance with a user-defined scoring method that builds on top of various subgraphs articulated by the user. Approximate Importance Query (AIQ), our third model, allows partial and inexact matchings and returns the top-k of them under user-specified approximation terms and scoring functions. In the fourth model, called Probabilistic Importance Query (PIQ), a query consists of several sub-blocks: one mandatory block that must be mapped and other blocks that can be opportunistically mapped. The probability is calculated from various aspects of the answers, such as the number of mapped blocks and the vertices' properties in each block, and the top-k most probable answers are returned. An important distinguishing feature of our work is that we allow the user a huge amount of freedom in specifying: (i) what pattern and approximation they consider important, (ii) how to score answers, irrespective of whether they are vertices or substitutions, and (iii) how to combine and aggregate scores generated by multiple patterns and/or multiple substitutions. Because so much power is given to the user, indexing is more challenging than in situations where additional restrictions are imposed on the queries the user can ask. The proposed algorithms for the first model can also be used for answering SPARQL queries with ORDER BY and LIMIT, and the method for the second model also works for SPARQL queries with GROUP BY, ORDER BY and LIMIT. We test our algorithms on multiple real-world graph databases, showing that our algorithms are far more efficient than popular triple stores.
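To make the "top-k under a user-defined score" idea concrete, here is a minimal, hypothetical Python sketch (not one of the thesis's four models or its pruning techniques): candidate answers, for example variable-to-vertex substitutions, are streamed through a bounded heap so that only the k best under a user-supplied scoring function are kept, the same pattern as SPARQL ORDER BY ... LIMIT k.

import heapq

def top_k_answers(candidate_answers, score, k):
    # Min-heap of (score, tie_breaker, answer); the weakest of the current
    # best k sits at heap[0], so candidates that cannot beat it are pruned.
    heap = []
    for i, answer in enumerate(candidate_answers):
        s = score(answer)
        if len(heap) < k:
            heapq.heappush(heap, (s, i, answer))
        elif s > heap[0][0]:
            heapq.heapreplace(heap, (s, i, answer))
    return [answer for s, _, answer in sorted(heap, key=lambda t: -t[0])]

# Usage with a user-defined notion of importance over matched vertices:
answers = [{"?x": "alice", "weight": 3}, {"?x": "bob", "weight": 7}, {"?x": "carol", "weight": 5}]
print(top_k_answers(answers, score=lambda a: a["weight"], k=2))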
Abstract:
Many existing encrypted Internet protocols leak information through packet sizes and timing. Though seemingly innocuous, prior work has shown that such leakage can be used to recover part or all of the plaintext being encrypted. The prevalence of encrypted protocols as the underpinning of such critical services as e-commerce, remote login, and anonymity networks, and the increasing feasibility of attacks on these services, represent a considerable risk to communications security. Existing mechanisms for preventing traffic analysis focus on re-routing and padding. These prevention techniques have considerable resource and overhead requirements. Furthermore, padding is easily detectable and, in some cases, can introduce its own vulnerabilities. To address these shortcomings, we propose embedding real traffic in synthetically generated encrypted cover traffic. Novel to our approach is our use of realistic network protocol behavior models to generate cover traffic. The observable traffic we generate also has the benefit of being indistinguishable from other real encrypted traffic, further thwarting an adversary's ability to target attacks. In this dissertation, we introduce the design of a proxy system called TrafficMimic that implements realistic cover traffic tunneling and can be used alone or integrated with the Tor anonymity system. We describe the cover traffic generation process, including the subtleties of implementing a secure traffic generator. We show that TrafficMimic cover traffic can fool a complex protocol classification attack with 91% of the accuracy of real traffic. TrafficMimic cover traffic is also not detected by a binary classification attack specifically designed to detect TrafficMimic. We evaluate the performance of tunneling with independent cover traffic models and find that they are comparable to, and in some cases more efficient than, generic constant-rate defenses. We then use simulation and analytic modeling to understand the performance of cover traffic tunneling more deeply. We find that we can take measurements from real or simulated traffic with no tunneling and use them to estimate parameters for an accurate analytic model of the performance impact of cover traffic tunneling. Once validated, we use this model to better understand how delay, bandwidth, tunnel slowdown, and stability affect cover traffic tunneling. Finally, we take the insights from our simulation study and develop several biasing techniques that we can use to match the cover traffic to the real traffic while simultaneously bounding external information leakage. We study these bias methods using simulation and evaluate their security using a Bayesian inference attack. We find that we can safely improve performance with biasing while preventing both traffic analysis and defense detection attacks. We then apply these biasing methods to the real TrafficMimic implementation and evaluate it on the Internet. We find that biasing can provide a 3-5x improvement in bandwidth for bulk transfers and a 2.5-9.5x speedup for Web browsing over tunneling without biasing.
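A toy sketch of the cover-traffic idea (Python with hypothetical helper names, not TrafficMimic's actual interfaces): packets are emitted on the schedule and at the sizes dictated by a cover-traffic model, and real application bytes only ride inside those scheduled packets, with padding filling the rest, so the observable size and timing pattern follows the model rather than the real traffic. Framing, reassembly and flow control are omitted.

import os
import queue
import time

def cover_traffic_tunnel(real_data, next_cover_packet, send_encrypted, stop):
    # real_data: queue.Queue of outgoing byte chunks from the application
    # next_cover_packet(): hypothetical model draw -> (delay_seconds, payload_size)
    # send_encrypted(payload): hypothetical sender of one encrypted record
    # stop: threading.Event-like shutdown flag
    while not stop.is_set():
        delay, size = next_cover_packet()
        time.sleep(delay)                      # follow the model's timing, not the app's
        try:
            chunk = real_data.get_nowait()[:size]
        except queue.Empty:
            chunk = b""                        # nothing real to send in this slot
        padding = os.urandom(size - len(chunk))
        send_encrypted(chunk + padding)        # observable size matches the model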
Abstract:
Trust is a pervasive phenomenon in our lives. We trust our family members and lovers, our physicians and teachers, our politicians and even strangers on the street. Trust has instrumental value for us, but at the same time it is often accompanied by risk. This is the reason why it is important to distinguish trust that is warranted or justified from blind trust. In order to answer the question of how trust is justified, however, it is crucial to know exactly what the fundamental nature of trust is. In the paper, I reconstruct three accounts of trust that operate with the assumption that trust is fundamentally a mental state – the cognitivist account, the voluntaristic account and the affect-based account. I argue that all of these accounts make reference to deeply held intuitions about trust that are incompatible with each other. As a solution to this unfortunate dialectical situation, I suggest giving up the assumption that trust is primarily a mental state. Instead, I argue for a position according to which trust is best understood as a two-place predicate that characterizes a specific relationship in which we can stand to each other.
Abstract:
This document presents GEmSysC, a unified cryptographic API for embedded systems. Software layers implementing this API can be built over existing libraries, allowing embedded software to access cryptographic functions in a consistent way that does not depend on the underlying library. The API complies with good practices for API design and for embedded software development, and it took its inspiration from other cryptographic libraries and standards. The main inspiration for creating GEmSysC was the CMSIS-RTOS standard, which defines a unified API for embedded software in an implementation-independent way, but targets operating systems instead of cryptographic functions. GEmSysC is made of a generic core and attachable modules, one for each cryptographic algorithm. This document contains the specification of the core of GEmSysC and three of its modules: AES, RSA and SHA-256. GEmSysC was built targeting embedded systems, but this does not restrict its use to such systems – after all, embedded systems are just very limited computing devices. As a proof of concept, two implementations of GEmSysC were made. One of them was built over wolfSSL, which is an open-source library for embedded systems. The other was built over OpenSSL, which is open source and a de facto standard. Unlike wolfSSL, OpenSSL does not specifically target embedded systems. The implementation built over wolfSSL was evaluated on a Cortex-M3 processor with no operating system, while the implementation built over OpenSSL was evaluated on a personal computer with the Windows 10 operating system. This document presents test results showing GEmSysC to be simpler than other libraries in some aspects. These results show that both implementations incur little overhead in computation time compared to the cryptographic libraries themselves. The overhead of each implementation has been measured for each cryptographic algorithm and is between around 0% and 0.17% for the implementation over wolfSSL and between 0.03% and 1.40% for the one over OpenSSL. This document also presents the memory costs of each implementation.
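The layering idea can be sketched briefly (Python and invented names used only for illustration; the real GEmSysC is a C API for embedded targets): application code talks to a small, library-independent interface, and each backend module maps that interface onto a concrete library, so swapping wolfSSL for OpenSSL requires no caller changes.

import hashlib

class HashlibSha256Backend:
    # One possible backend; a wolfSSL- or OpenSSL-backed module would expose
    # the same three calls on top of that library's native context type.
    def init(self):
        self._ctx = hashlib.sha256()

    def update(self, data: bytes):
        self._ctx.update(data)

    def final(self) -> bytes:
        return self._ctx.digest()

def hash_message(backend, message: bytes) -> bytes:
    # Callers only ever see init/update/final, never the underlying library.
    backend.init()
    backend.update(message)
    return backend.final()

print(hash_message(HashlibSha256Backend(), b"hello").hex())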
Abstract:
The essive is a Finnish case form used in the modern language to express various states of being (Hän on opettajana ~ sairaana, 'she/he is (working) as a teacher ~ is ill'). It is historically a locative, and its original place-marking function is still visible in various fossilised forms (kotona 'at home', luona 'at, by'). The essive also belongs to the system of temporal expressions (lauantaina 'on Saturday', ensi vuotena 'next year'). In this study I describe the use of the essive case especially from the perspective of expressing a state of being. I pay attention to the meaning of the case and aim to refine its syntactic description by drawing on general-linguistic and typological ways of categorising expressions of state. The data of the study are threefold. The main data come from the Syntax Archive (Lauseopin arkisto), from which a search yielded 9096 phrases containing a word in the essive case, together with their sentence contexts. I supplement these data with examples collected from actual usage and with intuition-based so-called clear cases. The research method is mainly qualitative examination of the data, but I also consider the essive more theoretically from the perspective of a rule system. My most important starting point is to produce research findings that can enter into dialogue with typological research while the analysis is still carried out on the language's own terms. Especially in the description of semantics, my approach is oriented towards cognitive linguistics. In a reduced nominal clause, the function of the essive is to express the temporariness and changeability of a state, whereas with a semantically contentful verb an essive-marked expression of state often acquires other meanings and can also express an alternative state or a state that is the cause of the situation. From the aspectual point of view the essive brings dynamic features to the state, and it also carves out a clearly distinct territory in relation to other grammatical and derivational categories expressing states and related meanings. From the perspective of syntactic classification, predicating essive-marked expressions of state fall into three main groups according to their relation to the predicate of the clause: they can serve as nominal predicates of copular clauses, as complements of semantically contentful verbs, and as secondary predicates, which are adjuncts. In addition, essive-marked expressions of state can function as non-predicating clause adverbials. The study is a broad, data-based description of the essive case which shows that a general-linguistic, typological approach is suitable for describing the Finnish essive. In the subcategorisation, however, one must resort to semantics, and the continuum-like nature of the main categories must also be accepted. On the basis of the results it is possible to discuss whether the syntactic classification of predicating clause constituents could be revised. The study also opens up new questions for research, especially from the perspective of comparing the use of cases.
Abstract:
Secure computation involves multiple parties computing a common function while keeping their inputs private, and is a growing field of cryptography due to its potential for maintaining privacy guarantees in real-world applications. However, current secure computation protocols are not yet efficient enough to be used in practice. We argue that this is due to much of the research effort being focused on generality rather than specificity. Namely, current research tends to focus on constructing and improving protocols for the strongest notions of security or for an arbitrary number of parties. However, in real-world deployments, these security notions are often too strong, or the number of parties running a protocol would be smaller. In this thesis we take several steps towards bridging the efficiency gap of secure computation by focusing on constructing efficient protocols for specific real-world settings and security models. In particular, we make the following four contributions: - We show an efficient (when amortized over multiple runs) maliciously secure two-party secure computation (2PC) protocol in the multiple-execution setting, where the same function is computed multiple times by the same pair of parties. - We improve the efficiency of 2PC protocols in the publicly verifiable covert security model, where a party can cheat with some probability, but if it is caught then the honest party obtains a certificate proving that the given party cheated. - We show how to optimize existing 2PC protocols when the function to be computed includes predicate checks on its inputs. - We demonstrate an efficient maliciously secure protocol in the three-party setting.
Abstract:
The article collects and discusses the variants with εἰμί plus participle identified in the witnesses of the textual tradition of the Septuagint Pentateuch; variants with γίνομαι are also considered, but only those concerning the nominal predicate. The aim is to observe whether the variants show traces of the establishment of the periphrasis, along with possible concentrations of frequency and nuances of use, with particular attention to the dating of the witnesses in which they appear. A section is also devoted to variants of constructions with a nominal predicate (copula + adjective) as against a finite verb.
Abstract:
Bilinear pairings can be used to construct cryptographic systems with very desirable properties. A pairing performs a mapping on members of groups on elliptic and genus 2 hyperelliptic curves to an extension of the finite field on which the curves are defined. The finite fields must, however, be large to ensure adequate security. The complicated group structure of the curves and the expensive field operations result in time-consuming computations that are an impediment to the practicality of pairing-based systems. The Tate pairing can be computed efficiently using the ηT method. Hardware architectures can be used to accelerate the required operations by exploiting the parallelism inherent to the algorithmic and finite field calculations. The Tate pairing can be performed on elliptic curves of characteristic 2 and 3 and on genus 2 hyperelliptic curves of characteristic 2. Curve selection is dependent on several factors including desired computational speed, the area constraints of the target device and the required security level. In this thesis, custom hardware processors for the acceleration of the Tate pairing are presented and implemented on an FPGA. The underlying hardware architectures are designed with care to exploit available parallelism while ensuring resource efficiency. The characteristic 2 elliptic curve processor contains novel units that return a pairing result in a very low number of clock cycles. Despite the more complicated computational algorithm, the speed of the genus 2 processor is comparable. Pairing computation on each of these curves can be appealing in applications with various attributes. A flexible processor that can perform pairing computation on elliptic curves of characteristic 2 and 3 has also been designed. An integrated hardware/software design and verification environment has been developed. This system automates the procedures required for robust processor creation and enables the rapid provision of solutions for a wide range of cryptographic applications.
Abstract:
Concerns have been raised in the past several years that introducing new transport protocols on the Internet has become increasingly difficult, not least because there is no agreed-upon way for a source end host to find out if a transport protocol is supported all the way to a destination peer. A solution to a similar problem—finding out support for IPv6—has been proposed and is currently being deployed: the Happy Eyeballs (HE) mechanism. HE has also been proposed as an efficient way for an application to select an appropriate transport protocol. Still, there are few, if any, performance evaluations of transport HE. This paper demonstrates that transport HE could indeed be a feasible solution to the transport support problem. The paper evaluates HE between TCP and SCTP using TLS encrypted and unencrypted traffic, and shows that although there is indeed a cost in terms of CPU load to introduce HE, the cost is relatively small, especially in comparison with the cost of using TLS encryption. Moreover, our results suggest that HE has a marginal impact on memory usage. Finally, by introducing caching of previous connection attempts, the additional cost of transport HE could be significantly reduced.
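A minimal sketch of the transport Happy Eyeballs pattern evaluated above (Python asyncio; the alternative-transport opener is a hypothetical stand-in, since the standard library offers TCP but not SCTP): both attempts are started concurrently, the first one to complete wins, and the loser is cancelled. A real implementation would also fall back if the winner failed and, as the paper suggests, cache the outcome for later connections.

import asyncio

async def happy_eyeballs_connect(host, tcp_port, alt_port, open_alt_transport):
    # Start both transport attempts concurrently.
    tcp_attempt = asyncio.ensure_future(asyncio.open_connection(host, tcp_port))
    alt_attempt = asyncio.ensure_future(open_alt_transport(host, alt_port))
    done, pending = await asyncio.wait(
        {tcp_attempt, alt_attempt}, return_when=asyncio.FIRST_COMPLETED
    )
    for task in pending:
        task.cancel()              # abandon the slower transport attempt
    return done.pop().result()     # connection objects of the winning transport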
Abstract:
This work is a description of Tajio, a Western Malayo-Polynesian language spoken in Central Sulawesi, Indonesia. It covers the essential aspects of Tajio grammar without being exhaustive. Tajio has a medium-sized phoneme inventory consisting of twenty consonants and five vowels. The language does not have lexical (word) stress; rather, it has a phrasal accent. This phrasal accent regularly occurs on the penultimate syllable of an intonational phrase, rendering this syllable auditorily prominent through a pitch rise. Possible syllable structures in Tajio are (C)V(C). CVN structures are allowed as closed syllables, but CVN syllables in word-medial position are not frequent. As in other languages of the area, the only consonant sequences allowed in native Tajio words are sequences of a nasal followed by a homorganic obstruent. The homorganic nasal-obstruent sequences found in Tajio can occur word-initially and word-medially but never in word-final position. As in many Austronesian languages, word class classification in Tajio is not straightforward. The classification of words in Tajio must be carried out on two levels: the morphosyntactic level and the lexical level. The open word classes in Tajio consist of nouns and verbs. Verbs are further divided into intransitive verbs (dynamic intransitive verbs and statives) and dynamic transitive verbs. Based on their morphological potential, lexical roots in Tajio fall into three classes: single-class roots, dual-class roots and multi-class roots. There are two basic transitive constructions in Tajio: Actor Voice and Undergoer Voice, in which the actor or undergoer argument, respectively, serves as subject. Tajio shares many characteristics with symmetrical voice languages, yet it is not fully symmetrical, as arguments in AV and UV are not equally marked. Neither subjects nor objects are marked in AV constructions. In UV constructions, however, subjects are unmarked while objects are marked either by prefixation or cliticization. Evidence from relativization, control and raising constructions supports the analysis that AV and UV are in fact transitive, with subject arguments and object arguments behaving alike in both voices. Only the subject can be relativized, controlled, raised or function as the implicit subject of subjectless adverbial clauses. In contrast, the objects of AV and UV constructions do not exhibit these features. Tajio is a predominantly head-marking language with basic A-V-O constituent order. V and O form a constituent, and the subject can either precede or follow this complex. Thus, basic word order is S-V-O or V-O-S. Subject as well as non-subject arguments may be omitted when contextually specified. Verbs are marked for voice and mood, the latter of which is obligatory. The two values distinguished are realis and non-realis. Depending on the type of predicate involved in clause formation, three clause types can be distinguished: verbal clauses, existential clauses and non-verbal clauses. Tajio has a small number of multi-verbal structures that appear to qualify as serial verb constructions. SVCs in Tajio always include a motion verb or a directional.