950 results for Suzumura consistency
Abstract:
Formal correctness of complex multi-party network protocols can be difficult to verify. While models of specific fixed compositions of agents can be checked against design constraints, protocols which lend themselves to arbitrarily many compositions of agents, such as the chaining of proxies or the peering of routers, are more difficult to verify because they represent potentially infinite state spaces and may exhibit emergent behaviors which may not materialize under particular fixed compositions. We address this challenge by developing an algebraic approach that enables us to reduce arbitrary compositions of network agents into a behaviorally-equivalent (with respect to some correctness property) compact, canonical representation, which is amenable to mechanical verification. Our approach consists of an algebra and a set of property-preserving rewrite rules for the Canonical Homomorphic Abstraction of Infinite Network protocol compositions (CHAIN). Using CHAIN, an expression over our algebra (i.e., a set of configurations of network protocol agents) can be reduced to another behaviorally-equivalent expression (i.e., a smaller set of configurations). Repeated application of such rewrite rules produces a canonical expression which can be checked mechanically. We demonstrate our approach by characterizing deadlock-prone configurations of HTTP agents, as well as establishing useful properties of an overlay protocol for scheduling MPEG frames, and of a protocol for Web intra-cache consistency.
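The reduction step can be pictured as fixed-point rewriting over sequences of agents. Below is a minimal Python sketch under assumed names: the agent vocabulary and the single rule are illustrative stand-ins, not the actual CHAIN algebra or its property-preserving rule set.

```python
# Minimal sketch of fixed-point rewriting over agent compositions.
# The agent names and the single rewrite rule are illustrative
# assumptions, not the actual CHAIN algebra or rule set.

def rewrite_once(chain, rules):
    """Apply the first matching rule to any window of the composition."""
    for pattern, replacement in rules:
        n = len(pattern)
        for i in range(len(chain) - n + 1):
            if tuple(chain[i:i + n]) == pattern:
                return chain[:i] + list(replacement) + chain[i + n:], True
    return chain, False

def canonicalize(chain, rules):
    """Rewrite repeatedly until no rule applies, yielding a canonical form."""
    changed = True
    while changed:
        chain, changed = rewrite_once(chain, rules)
    return chain

# Hypothetical rule: two adjacent proxies are behaviorally equivalent,
# with respect to the property of interest, to a single proxy.
rules = [(("proxy", "proxy"), ("proxy",))]
config = ["client", "proxy", "proxy", "proxy", "server"]
print(canonicalize(config, rules))  # ['client', 'proxy', 'server']
```

In the actual approach, each rule would first be shown to preserve behavior with respect to the correctness property; only then does the canonical form obtained by exhaustive rewriting become a sound target for mechanical checking.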
Abstract:
A method for reconstruction of 3D polygonal models from multiple views is presented. The method uses sampling techniques to construct a texture-mapped semi-regular polygonal mesh of the object in question. Given a set of views and segmentation of the object in each view, constructive solid geometry is used to build a visual hull from silhouette prisms. The resulting polygonal mesh is simplified and subdivided to produce a semi-regular mesh. Regions of model fit inaccuracy are found by projecting the reference images onto the mesh from different views. The resulting error images for each view are used to compute a probability density function, and several points are sampled from it. Along the epipolar lines corresponding to these sampled points, photometric consistency is evaluated. The mesh surface is then pulled towards the regions of higher photometric consistency using free-form deformations. This sampling-based approach produces a photometrically consistent solution in much less time than possible with previous multi-view algorithms given arbitrary camera placement.
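One step of the pipeline, turning per-view error images into a sampling density, can be sketched directly. The snippet below is a hedged illustration with a synthetic error map; in the method itself the error image comes from projecting the reference images onto the mesh from different views.

```python
import numpy as np

# Sketch: normalise an error image into a probability density function
# and sample candidate pixel locations from it. The error map here is
# synthetic, standing in for the reprojection error described above.

rng = np.random.default_rng(0)
error_image = rng.random((120, 160)) ** 4       # stand-in error map

pdf = error_image / error_image.sum()           # normalise to a PDF
flat = pdf.ravel()
idx = rng.choice(flat.size, size=50, p=flat)    # draw 50 sample indices
ys, xs = np.unravel_index(idx, pdf.shape)       # back to pixel coordinates

# These (x, y) locations are the points whose epipolar lines would then
# be searched for photometric consistency.
samples = list(zip(xs.tolist(), ys.tolist()))
print(samples[:5])
```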
Abstract:
Programmers of parallel processes that communicate through shared globally distributed data structures (DDS) face a difficult choice. Either they must explicitly program DDS management, by partitioning or replicating it over multiple distributed memory modules, or they must be content with a high-latency coherent (sequentially consistent) memory abstraction that hides the DDS' distribution. We present Mermera, a new formalism and system that enable a smooth spectrum of noncoherent shared memory behaviors to coexist between the above two extremes. Our approach allows us to define known noncoherent memories in a new, simple way, to identify new memory behaviors, and to characterize generic mixed-behavior computations. The latter are useful for programming using multiple behaviors that complement each other's advantages. On the practical side, we show that the large class of programs that use asynchronous iterative methods (AIM) can run correctly on slow memory, one of the weakest, and hence most efficient and fault-tolerant, noncoherence conditions. An example AIM program to solve linear equations is developed to illustrate: (1) the need for concurrently mixing memory behaviors, and (2) the performance gains attainable via noncoherence. Other program classes tolerate weak memory consistency by synchronizing in such a way as to yield executions indistinguishable from coherent ones. AIM computations on noncoherent memory yield noncoherent, yet correct, computations. We report performance data that exemplifies the potential benefits of noncoherence, in terms of raw memory performance as well as application speed.
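The claim that AIM programs tolerate slow memory can be illustrated with a toy solver. The sketch below is not Mermera itself: it simulates staleness by letting updates read from an infrequently refreshed snapshot, and uses an assumed diagonally dominant system so that convergence is expected.

```python
import numpy as np

# Sketch of why asynchronous iterative methods tolerate noncoherent
# ("slow") memory: a Jacobi-style solver for Ax = b still converges when
# each update reads possibly stale values of the other components.

rng = np.random.default_rng(1)
n = 8
A = np.eye(n) * n + rng.random((n, n))   # diagonally dominant system (assumed)
b = rng.random(n)
x = np.zeros(n)
stale = x.copy()                         # the stale view other "processes" see

for step in range(200):
    for i in range(n):
        others = np.arange(n) != i
        # Read the other components from the stale snapshot, as a
        # noncoherent memory might serve them.
        x[i] = (b[i] - A[i, others] @ stale[others]) / A[i, i]
    if step % 3 == 0:                    # writes propagate only occasionally
        stale = x.copy()

print(np.allclose(A @ x, b, atol=1e-6))  # converges despite the staleness
```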
Abstract:
Formal tools like finite-state model checkers have proven useful in verifying the correctness of systems of bounded size and for hardening single system components against arbitrary inputs. However, conventional applications of these techniques are not well suited to characterizing emergent behaviors of large compositions of processes. In this paper, we present a methodology by which arbitrarily large compositions of components can, if sufficient conditions are proven concerning properties of small compositions, be modeled and completely verified by performing formal verifications upon only a finite set of compositions. The sufficient conditions take the form of reductions, which are claims that particular sequences of components will be causally indistinguishable from other, shorter sequences of components. We show how this methodology can be applied to a variety of network protocol applications, including two features of the HTTP protocol, a simple active networking applet, and a proposed web cache consistency algorithm. We also discuss its applicability to framing protocol design goals and to representing systems which employ non-model-checking verification methodologies. Finally, we briefly discuss how we hope to broaden this methodology to more general topological compositions of network applications.
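The role of reductions can be sketched concretely: if longer sequences of components are causally indistinguishable from shorter ones, then all compositions up to a given size collapse to a finite set of representatives, each of which can be handed to a model checker. The component names and the single reduction below are assumptions for illustration only.

```python
from itertools import product

# Sketch: collapse all compositions up to length k, using claimed
# reductions, into a finite set of representatives to model check.

reductions = {("proxy", "proxy"): ("proxy",)}    # hypothetical reduction claim

def reduce_seq(seq):
    seq = list(seq)
    changed = True
    while changed:
        changed = False
        for pat, rep in reductions.items():
            for i in range(len(seq) - len(pat) + 1):
                if tuple(seq[i:i + len(pat)]) == pat:
                    seq[i:i + len(pat)] = rep
                    changed = True
                    break
            if changed:
                break
    return tuple(seq)

components = ("client", "proxy", "server")
representatives = {reduce_seq(s)
                   for k in range(1, 6)
                   for s in product(components, repeat=k)}
print(len(representatives))   # a finite set, despite 363 raw compositions
```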
Abstract:
In research areas involving mathematical rigor, there are numerous benefits to adopting a formal representation of models and arguments: reusability, automatic evaluation of examples, and verification of consistency and correctness. However, broad accessibility has not been a priority in the design of formal verification tools that can provide these benefits. We propose a few design criteria to address these issues: a simple, familiar, and conventional concrete syntax that is independent of any environment, application, or verification strategy, and the possibility of reducing workload and entry costs by employing features selectively. We demonstrate the feasibility of satisfying such criteria by presenting our own formal representation and verification system. Our system's concrete syntax overlaps with English, LaTeX and MediaWiki markup wherever possible, and its verifier relies on heuristic search techniques that make the formal authoring process more manageable and consistent with prevailing practices. We employ techniques and algorithms that ensure a simple, uniform, and flexible definition and design for the system, so that it is easy to augment, extend, and improve.
Abstract:
In research areas involving mathematical rigor, there are numerous benefits to adopting a formal representation of models and arguments: reusability, automatic evaluation of examples, and verification of consistency and correctness. However, accessibility has not been a priority in the design of formal verification tools that can provide these benefits. In earlier work [30] we attempted to address this broad problem by proposing several specific design criteria organized around the notion of a natural context: the sphere of awareness a working human user maintains of the relevant constructs, arguments, experiences, and background materials necessary to accomplish the task at hand. In this report we evaluate our proposed design criteria by using, within the context of novel research, a formal reasoning system that is designed according to these criteria. In particular, we consider how the design and capabilities of the formal reasoning system that we employ influence, aid, or hinder our ability to accomplish a formal reasoning task: the assembly of a machine-verifiable proof pertaining to the NetSketch formalism. NetSketch is a tool for the specification of constrained-flow applications and the certification of desirable safety properties imposed thereon. NetSketch is conceived to assist system integrators in two types of activities: modeling and design. It provides capabilities for compositional analysis based on a strongly-typed domain-specific language (DSL) for describing and reasoning about constrained-flow networks and the invariants that need to be enforced thereupon. In a companion paper [13] we overview NetSketch, highlight its salient features, and illustrate how it could be used in actual applications. In this paper, we define, using a machine-readable syntax, major parts of the formal system underlying the operation of NetSketch, along with its semantics and a corresponding notion of validity. We then provide a proof of soundness for the formalism that can be partially verified using a lightweight formal reasoning system that simulates natural contexts. A traditional presentation of these definitions and arguments can be found in the full report on the NetSketch formalism [12].
Abstract:
In work that involves mathematical rigor, there are numerous benefits to adopting a representation of models and arguments that can be supplied to a formal reasoning or verification system: reusability, automatic evaluation of examples, and verification of consistency and correctness. However, accessibility has not been a priority in the design of formal verification tools that can provide these benefits. In earlier work [Lap09a], we attempted to address this broad problem by proposing several specific design criteria organized around the notion of a natural context: the sphere of awareness a working human user maintains of the relevant constructs, arguments, experiences, and background materials necessary to accomplish the task at hand. This work expands one aspect of the earlier work by considering more extensively an essential capability for any formal reasoning system whose design is oriented around simulating the natural context: native support for a collection of mathematical relations that deal with common constructs in arithmetic and set theory. We provide a formal definition for a context of relations that can be used to both validate and assist formal reasoning activities. We provide a proof that any algorithm that implements this formal structure faithfully will necessarily converge. Finally, we consider the efficiency of an implementation of this formal structure that leverages modular implementations of well-known data structures: balanced search trees and transitive closures of hypergraphs.
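A minimal sketch of the "context of relations" idea, assuming a single ordering relation and an incremental transitive-closure update (the actual system covers a richer collection of arithmetic and set-theoretic relations):

```python
# Sketch: maintain the transitive closure of asserted ordering facts so
# that validating a reasoning step is a constant-time set lookup. The
# facts and the single relation (<=) are illustrative assumptions.

class RelationContext:
    def __init__(self):
        self.le = set()      # transitively closed set of pairs (a, b) with a <= b

    def assert_le(self, a, b):
        """Add a <= b and restore transitive closure incrementally."""
        new = {(a, b)}
        new |= {(x, b) for (x, y) in self.le if y == a}          # x <= a <= b
        new |= {(a, y) for (x, y) in self.le if x == b}          # a <= b <= y
        new |= {(x, w) for (x, y) in self.le if y == a
                       for (z, w) in self.le if z == b}          # x <= a, b <= w
        self.le |= new

    def check_le(self, a, b):
        """Validate a claimed fact against the closed context."""
        return a == b or (a, b) in self.le

ctx = RelationContext()
ctx.assert_le("x", "y")
ctx.assert_le("y", "z")
print(ctx.check_le("x", "z"))    # True, derived by transitivity
```

Each assertion in such a structure terminates, since the closure over a finite set of symbols can only grow to a bounded size, which gives the intuition behind the convergence claim.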
Abstract:
We introduce a method for recovering the spatial and temporal alignment between two or more views of objects moving over a ground plane. Existing approaches either assume that the streams are globally synchronized, so that only the spatial alignment needs to be solved, or that the temporal misalignment is small enough that exhaustive search can be performed. In contrast, our approach can recover both the spatial and temporal alignment. We compute for each trajectory a number of interesting segments, and we use their description to form putative matches between trajectories. Each pair of corresponding interesting segments induces a temporal alignment, and defines an interval of common support across two views of an object that is used to recover the spatial alignment. Interesting segments and their descriptors are defined using algebraic projective invariants measured along the trajectories. Similarity between interesting segments is computed taking into account the statistics of such invariants. Candidate alignment parameters are verified by checking the consistency, in terms of the symmetric transfer error, of all the putative pairs of corresponding interesting segments. Experiments are conducted with two different sets of data, one with two views of an outdoor scene featuring moving people and cars, and one with four views of a laboratory sequence featuring moving radio-controlled cars.
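The verification step admits a compact illustration. The sketch below scores a candidate spatial alignment (taken here to be a planar homography, with toy values) by the symmetric transfer error over putative correspondences.

```python
import numpy as np

# Sketch: symmetric transfer error for a candidate alignment H between
# two views. H and the point correspondences are illustrative values.

def to_h(p):                       # Euclidean -> homogeneous
    return np.append(p, 1.0)

def from_h(p):                     # homogeneous -> Euclidean
    return p[:2] / p[2]

def symmetric_transfer_error(H, pts1, pts2):
    Hinv = np.linalg.inv(H)
    err = 0.0
    for p1, p2 in zip(pts1, pts2):
        fwd = from_h(H @ to_h(p1)) - p2      # residual mapping view 1 -> view 2
        bwd = from_h(Hinv @ to_h(p2)) - p1   # residual mapping view 2 -> view 1
        err += fwd @ fwd + bwd @ bwd
    return err

H = np.array([[1.0, 0.0, 5.0],
              [0.0, 1.0, -3.0],
              [0.0, 0.0, 1.0]])              # toy case: a pure translation
pts1 = np.array([[10.0, 20.0], [30.0, 40.0]])
pts2 = pts1 + np.array([5.0, -3.0])
print(symmetric_transfer_error(H, pts1, pts2))   # ~0 for consistent pairs
```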
Abstract:
A learning-based framework is proposed for estimating human body pose from a single image. Given a differentiable function that maps from pose space to image feature space, the goal is to invert the process: estimate the pose given only image features. The inversion is an ill-posed problem because the inverse mapping is one-to-many. Hence multiple solutions exist, and it is desirable to restrict the solution space to a smaller subset of feasible solutions. For example, not all human body poses are feasible due to anthropometric constraints. Since the space of feasible solutions may not admit a closed-form description, the proposed framework seeks to exploit machine learning techniques to learn an approximation that is smoothly parameterized over such a space. One such technique is Gaussian Process Latent Variable Modelling. Scaled conjugate gradient is then used to find the best matching pose in the space of feasible solutions given an input image. The formulation allows easy incorporation of various constraints, e.g. temporal consistency and anthropometric constraints. The performance of the proposed approach is evaluated on the task of upper-body pose estimation from silhouettes and compared with the Specialized Mapping Architecture. The estimation accuracy of the Specialized Mapping Architecture is at least one standard deviation worse than that of the proposed approach in experiments with synthetic data. In experiments with real video of humans performing gestures, the proposed approach produces qualitatively better estimation results.
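The final search can be sketched as optimisation in a low-dimensional latent space. In the sketch below a linear decoder stands in for the learned GPLVM mapping, and scipy's conjugate-gradient minimiser stands in for scaled conjugate gradient; both substitutions are assumptions made to keep the example self-contained.

```python
import numpy as np
from scipy.optimize import minimize

# Sketch: find the latent (pose) point whose predicted image features
# best match an observed feature vector. The linear map W is a toy
# stand-in for the learned GPLVM decoder.

rng = np.random.default_rng(0)
W = rng.normal(size=(20, 3))                  # latent (3-D) -> features (20-D)

def predict_features(z):
    return W @ z

true_z = np.array([0.5, -1.0, 2.0])
observed = predict_features(true_z) + 0.01 * rng.normal(size=20)

def objective(z):
    r = predict_features(z) - observed
    return r @ r                              # squared feature-space error

result = minimize(objective, x0=np.zeros(3), method="CG")
print(result.x)                               # close to the true latent pose
```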
Abstract:
The human urge to represent the three-dimensional world using two-dimensional pictorial representations dates back at least to Paleolithic times. Artists from ancient to modern times have struggled to understand how a few contours or color patches on a flat surface can induce mental representations of a three-dimensional scene. This article summarizes some of the recent breakthroughs in scientifically understanding how the brain sees that shed light on these struggles. These breakthroughs illustrate how various artists have intuitively understood paradoxical properties of how the brain sees, and have used that understanding to create great art. These paradoxical properties arise from how the brain forms the units of conscious visual perception; namely, representations of three-dimensional boundaries and surfaces. Boundaries and surfaces are computed in parallel cortical processing streams that obey computationally complementary properties. These streams interact at multiple levels to overcome their complementary weaknesses and to transform their complementary properties into consistent percepts. The article describes how properties of complementary consistency have guided the creation of many great works of art.
The psychology of immersion and development of a quantitative measure of immersive response in games
Abstract:
This study sets out to investigate the psychology of immersion and the immersive response of individuals in relation to video and computer games. Initially, an exhaustive review of literature is presented, including research into games, player demographics, personality and identity. Play in traditional psychology is also reviewed, as well as previous research into immersion and attempts to define and measure this construct. An online qualitative study was carried out (N=38), and data was analysed using content analysis. A definition of immersion emerged, as well as a classification of two separate types of immersion, namely, vicarious immersion and visceral immersion. A survey study (N=217) verified the discrete nature of these categories and rejected the null hypothesis that there was no difference between individuals' interpretations of vicarious and visceral immersion. The primary aim of this research was to create a quantitative instrument which measures the immersive response as experienced by the player in a single game session. The IMX Questionnaire was developed using data from the initial qualitative study and quantitative survey. Exploratory Factor Analysis was carried out on data from 300 participants for the IMX Version 1, and Confirmatory Factor Analysis was conducted on data from 380 participants on the IMX Version 2. IMX Version 3 was developed from the results of these analyses. This questionnaire was found to have high internal consistency reliability and validity.
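As a small illustration of the internal-consistency check reported above, the sketch below computes Cronbach's alpha on simulated questionnaire responses; the data are synthetic, not IMX responses.

```python
import numpy as np

# Sketch: Cronbach's alpha over a (respondents x items) score matrix.
# alpha = k/(k-1) * (1 - sum(item variances) / variance(total score))

def cronbach_alpha(scores):
    scores = np.asarray(scores, dtype=float)
    k = scores.shape[1]
    item_vars = scores.var(axis=0, ddof=1)
    total_var = scores.sum(axis=1).var(ddof=1)
    return (k / (k - 1)) * (1 - item_vars.sum() / total_var)

rng = np.random.default_rng(0)
trait = rng.normal(size=(300, 1))                    # latent "immersion" level
items = trait + 0.5 * rng.normal(size=(300, 10))     # 10 correlated items
print(round(cronbach_alpha(items), 2))               # high alpha for correlated items
```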
Abstract:
This thesis is focused on the application of numerical atomic basis sets in studies of the structural, electronic and transport properties of silicon nanowire structures from first principles within the framework of Density Functional Theory. First we critically examine the applied methodology and then offer predictions regarding the transport properties and realisation of silicon nanowire devices. The performance of numerical atomic orbitals is benchmarked against calculations performed with plane-wave basis sets. After establishing the convergence of total energy and electronic structure calculations with increasing basis size, we show that their quality greatly improves with the optimisation of the contraction for a fixed basis size. The double zeta polarised basis offers a reasonable approximation for studying structural and electronic properties, and transferability exists between various nanowire structures; this is most important for reducing the computational cost. The impact of basis sets on transport properties in silicon nanowires with oxygen and dopant impurities has also been studied. It is found that whilst transmission features quantitatively converge with increasing contraction, there is a weaker dependence on basis set for the mean free path; the double zeta polarised basis offers a good compromise, whereas the single zeta basis set yields qualitatively reasonable results. Studying the transport properties of nanowire-based transistor setups with p+-n-p+ and p+-i-p+ doping profiles, it is shown that charge self-consistency affects the I-V characteristics more significantly than the choice of basis set. It is predicted that such ultrascaled (3 nm length) transistors would show degraded performance due to relatively high source-drain tunnelling currents. Finally, it is shown that the hole mobility of Si nanowires nominally doped with boron decreases monotonically with decreasing width at fixed doping density and with increasing dopant concentration. Significant mobility variations are identified which can explain experimental observations.
Abstract:
The aim of this project is to integrate neuronal cell culture with commercial or in-house built micro-electrode arrays and MEMS devices. The resulting device is intended to support neuronal cell culture on its surface, expose specific portions of a neuronal population to different environments using microfluidic gradients, and stimulate/record neuronal electrical activity using micro-electrode arrays. Additionally, through integration of chemical surface patterning, such a device can be used to build neuronal cell networks of specific size, conformation and composition. The design of this device takes inspiration from the nervous system, because its development and regeneration are heavily influenced by surface chemistry and fluidic gradients. Hence, this device is intended to be a step forward in neuroscience research because it utilizes concepts similar to those found in nature. A large part of this research revolved around solving technical issues associated with the integration of biology, surface chemistry, electrophysiology and microfluidics. Commercially available micro-electrode arrays (MEAs) are mechanically and chemically brittle, making them unsuitable for certain surface modification and micro-fluidic integration techniques described in the literature. In order to successfully integrate all the aspects into one device, some techniques were heavily modified to ensure that their effects on the MEA were minimal. In terms of experimental work, this thesis consists of three parts. The first part dealt with characterization and optimization of surface patterning and micro-fluidic perfusion. Through extensive image analysis, the optimal conditions required for micro-contact printing and micro-fluidic perfusion were determined. The second part used a number of optimized techniques and successfully applied these to culturing patterned neural cells on a range of substrates including Pyrex, cyclo-olefin and SiN-coated Pyrex. The second part also described culturing neurons on MEAs and recording electrophysiological activity. The third part of the thesis described the integration of MEAs with patterned neuronal culture and microfluidic devices. Although the integration of all methodologies proved difficult, a large amount of data relating to biocompatibility, neuronal patterning, electrophysiology and integration was collected. Original solutions were successfully applied to solve a number of issues relating to the consistency of micro-printing and microfluidic integration, leading to the successful integration of techniques and device components.
Abstract:
Irish monitoring data on PCDD/Fs, DL-PCBs and Marker PCBs were collated and combined with Irish Adult Food Consumption Data to estimate the dietary background exposure of Irish adults to dioxins and PCBs. Furthermore, all available information on the 2008 Irish pork dioxin food contamination incident was collated and analysed with a view to evaluating any potential impact the incident may have had on the general dioxin and PCB background exposure levels estimated for the adult population in Ireland. The average upperbound daily intake of dioxins, Total WHO TEQ (2005) (PCDD/Fs & DL-PCBs), by Irish adults from environmental background contamination was estimated at 0.3 pg/kg bw/d, and at the 95th percentile at 1 pg/kg bw/d. The average upperbound daily intake of the sum of 6 Marker PCBs by Irish adults from background contamination ubiquitous in the environment was estimated at 1.6 ng/kg bw/d, and at the 95th percentile at 6.8 ng/kg bw/d. Dietary background exposure estimates for both dioxins and PCBs indicate that the Irish adult population has exposures below the European average, a finding which is also supported by the levels detected in the breast milk of Irish mothers. Exposure levels are below health-based guidance values and/or Body Burdens associated with the TWI (for dioxins) or with a NOAEL (for PCBs). Given the current toxicological knowledge, based on biomarker data and estimated dietary exposure, the general background exposure of the Irish adult population to dioxins and PCBs is of no human health concern. In 2008, a porcine fat sample taken as part of the national residues monitoring programme led to the detection of a major feed contamination incident in the Republic of Ireland. The source of the contamination was traced back to the use of contaminated oil in a direct-drying feed operation system. Congener profiles in animal fat and feed samples showed a high level of consistency and pinpointed the likely source of fuel contamination to be a highly chlorinated commercial PCB mixture. To estimate the additional exposure to dioxins and PCBs due to the contamination of pig and cattle herds, all data associated with the contamination incident were collected and systematically reviewed. A model was devised that took into account the proportion of contaminated product reaching the final consumer during the 90-day contamination incident window. For a 90-day period, the total additional exposure to Total TEQ (PCDD/F & DL-PCB) WHO (2005) amounted to 407 pg/kg bw/90d at the 95th percentile and 1911 pg/kg bw/90d at the 99th percentile. Exposure estimates derived for both dioxins and PCBs showed that the Body Burden of the general population remained largely unaffected by the contamination incident and that approximately 10% of the adult population in Ireland was exposed to elevated levels of dioxins and PCBs. Whilst people in this 10% cohort experienced quite a significant additional load to their existing body burden, the estimated exposure values do not indicate approximation of body burdens associated with adverse health effects, based on current knowledge. The exposure period was also limited in time to approximately 3 months, following the FSAI recall of contaminated meat immediately on detection of the contamination. A follow-up breast milk study on Irish first-time mothers conducted in 2009/2010 did not show any increase in concentrations compared to the study conducted in 2002. The latter supports the conclusion that the majority of the Irish adult population was not affected by the contamination incident.
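The shape of the intake estimates quoted above follows a simple body-weight-normalised calculation, sketched below with purely illustrative concentrations and consumption figures (not the Irish monitoring data).

```python
# Sketch: daily dietary intake in pg TEQ/kg bw/day is the sum over food
# groups of concentration x amount consumed, divided by body weight.
# All numbers below are illustrative assumptions.

foods = {
    # food group: (pg TEQ per g of product, g consumed per day)
    "dairy": (0.05, 250),
    "meat":  (0.04, 150),
    "fish":  (0.20,  30),
}
body_weight_kg = 70.0

daily_intake = sum(conc * grams for conc, grams in foods.values()) / body_weight_kg
print(f"{daily_intake:.2f} pg TEQ/kg bw/day")
```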
Abstract:
Recent years have witnessed a rapid growth in the demand for streaming video over the Internet, exposing challenges in coping with heterogeneous device capabilities and varying network throughput. When we couple this rise in streaming with the growing number of portable devices (smart phones, tablets, laptops), we see an ever-increasing demand for high-definition video online while on the move. Wireless networks are inherently characterised by restricted shared bandwidth and relatively high error loss rates, thus presenting a challenge for the efficient delivery of high quality video. Additionally, mobile devices can support/demand a range of video resolutions and qualities. This demand for mobile streaming highlights the need for adaptive video streaming schemes that can adjust to available bandwidth and heterogeneity, and can provide graceful changes in video quality, all while maintaining viewing satisfaction. In this context the use of well-known scalable media streaming techniques, commonly known as scalable coding, is an attractive solution and the focus of this thesis. In this thesis we investigate the transmission of existing scalable video models over a lossy network and determine how the variation in viewable quality is affected by packet loss. This work focuses on leveraging the benefits of scalable media, while reducing the effects of data loss on achievable video quality. The overall approach is focused on the strategic packetisation of the underlying scalable video and how best to utilise error resiliency to maximise viewable quality. In particular, we examine the manner in which scalable video is packetised for transmission over lossy networks and propose new techniques that reduce the impact of packet loss on scalable video by selectively choosing how to packetise the data and which data to transmit. We also exploit redundancy techniques, such as error resiliency, to enhance the stream quality by ensuring a smooth play-out with fewer changes in achievable video quality. The contributions of this thesis are in the creation of new segmentation and encapsulation techniques which increase the viewable quality of existing scalable models by fragmenting and re-allocating the video sub-streams based on user requirements, available bandwidth and variations in loss rates. We offer new packetisation techniques which reduce the effects of packet loss on viewable quality by leveraging the increase in the number of frames per group of pictures (GOP) and by providing equality of data in every packet transmitted per GOP. These provide novel mechanisms for packetisation and error resiliency, as well as providing new applications for existing techniques such as Interleaving and Priority Encoded Transmission. We also introduce three new scalable coding models, which offer a balance between transmission cost and the consistency of viewable quality.
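The interleaving idea behind such packetisation techniques can be illustrated with a toy GOP. The frame and packet sizes below are assumptions chosen for clarity, not parameters from the thesis.

```python
# Sketch: stripe the chunks of a GOP across packets (round-robin) so a
# single lost packet removes a small slice of every frame rather than
# entire frames. Sizes are toy values.

gop = [f"frame{i}-chunk{j}" for i in range(8) for j in range(4)]   # 8 frames x 4 chunks
packets_per_gop = 4

packets = [gop[p::packets_per_gop] for p in range(packets_per_gop)]  # interleaved assignment

lost = 1                                    # suppose packet 1 is dropped in transit
received = [chunk for p, pkt in enumerate(packets) if p != lost for chunk in pkt]

# Every frame retains 3 of its 4 chunks; no frame is lost entirely.
per_frame = {i: sum(chunk.startswith(f"frame{i}-") for chunk in received)
             for i in range(8)}
print(per_frame)
```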