931 results for Encoding (symbols)


Relevance:

10.00%

Publisher:

Abstract:

This thesis develops a detailed conceptual design method and a system software architecture defined with a parametric and generative evolutionary design system to support an integrated interdisciplinary building design approach. The research recognises the need to shift design efforts toward the earliest phases of the design process to support crucial design decisions that have a substantial cost implication on the overall project budget. The overall motivation of the research is to improve the quality of designs produced at the author's employer, the General Directorate of Major Works (GDMW) of the Saudi Arabian Armed Forces. GDMW produces many buildings that have standard requirements, across a wide range of environmental and social circumstances. A rapid means of customising designs for local circumstances would have significant benefits. The research considers the use of evolutionary genetic algorithms in the design process and the ability to generate and assess a wider range of potential design solutions than a human could manage. This wider-ranging assessment, during the early stages of the design process, means that the generated solutions will be more appropriate for the defined design problem. The research work proposes a design method and system that promotes a collaborative relationship between human creativity and computer capability. The tectonic design approach is adopted as a process-oriented approach that values the process of design as much as the product. The aim is to connect the evolutionary systems to performance assessment applications, which are used as prioritised fitness functions. This will produce design solutions that respond to their environmental and functional requirements. This integrated, interdisciplinary approach to design will produce solutions through a design process that considers and balances the requirements of all aspects of the design. Since this thesis covers a wide area of research material, a 'methodological pluralism' approach was used, incorporating both prescriptive and descriptive research methods. Multiple models of research were combined and the overall research was undertaken in three main stages: conceptualisation, development and evaluation. The first two stages lay the foundations for the specification of the proposed system, where key aspects of the system that have not previously been proven in the literature were implemented to test its feasibility. As a result of combining the existing knowledge in the area with the newly verified key aspects of the proposed system, this research can form the base for a future software development project. The evaluation stage, which includes building the prototype system to test and evaluate the system performance based on the criteria defined in the earlier stage, is not within the scope of this thesis. The research results in a conceptual design method and a proposed system software architecture. The proposed system is called the 'Hierarchical Evolutionary Algorithmic Design (HEAD) System'. The HEAD system has been shown to be feasible through the initial illustrative paper-based simulation. The HEAD system consists of two main components: the 'Design Schema' and the 'Synthesis Algorithms'. The HEAD system reflects the major research contribution in the way it is conceptualised, while secondary contributions are achieved within the system components.
The design schema provides constraints on the generation of designs, thus enabling the designer to create a wide range of potential designs that can then be analysed for desirable characteristics. The design schema supports the digital representation of designers' creativity within a dynamic design framework that can be encoded and then executed through the use of evolutionary genetic algorithms. The design schema incorporates 2D and 3D geometry and graph theory for space layout planning and building formation, using the Lowest Common Design Denominator (LCDD) of a parameterised 2D module and a 3D structural module. This provides a bridge between the standard adjacency requirements and the evolutionary system. The use of graphs as an input to the evolutionary algorithm supports the introduction of constraints in a way that is not supported by standard evolutionary techniques. The process of design synthesis is guided by a higher-level description of the building that supports geometrical constraints. The Synthesis Algorithms component analyses designs at four levels: 'Room', 'Layout', 'Building' and 'Optimisation'. At each level, multiple fitness functions are embedded into the genetic algorithm to target the specific requirements of the relevant decomposed part of the design problem. Decomposing the design problem so that the design requirements of each level can be dealt with separately, and then reassembling them in a bottom-up approach, reduces the generation of non-viable solutions by constraining the options available at the next higher level. The iterative approach, in exploring the range of design solutions through modification of the design schema as the understanding of the design problem improves, assists in identifying conflicts in the design requirements. Additionally, the hierarchical set-up allows the embedding of multiple fitness functions into the genetic algorithm, each relevant to a specific level. This supports an integrated multi-level, multi-disciplinary approach. The HEAD system promotes a collaborative relationship between human creativity and computer capability. The design schema component, as the input to the procedural algorithms, enables the encoding of certain aspects of the designer's subjective creativity. By focusing on finding solutions for the relevant sub-problems at the appropriate levels of detail, the hierarchical nature of the system assists in the design decision-making process.
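
As a rough illustration of the hierarchical, multi-fitness-function idea described in this abstract, the following Python sketch runs a small genetic algorithm at each of the four levels and passes the winning candidate's parameter bounds up to the next level. The encoding, fitness functions, selection scheme and all names are illustrative assumptions; this is not the HEAD system's actual specification.

```python
# Minimal sketch (not the HEAD system itself): a hierarchical evolutionary loop in
# which each level ('Room', 'Layout', 'Building', 'Optimisation') runs a genetic
# algorithm with its own fitness functions, and the best candidate of one level
# constrains the options available at the next higher level.
# All encodings and fitness functions here are illustrative assumptions.
import random

LEVELS = ["Room", "Layout", "Building", "Optimisation"]

def make_individual(constraints):
    # A candidate design is encoded as a list of parameter values sampled within
    # the bounds handed up from the previous (lower) level.
    return [random.uniform(lo, hi) for lo, hi in constraints]

def evaluate(individual, fitness_functions, weights):
    # Multiple prioritised fitness functions are combined into a single score.
    return sum(w * f(individual) for f, w in zip(fitness_functions, weights))

def evolve(constraints, fitness_functions, weights, pop_size=30, generations=40):
    population = [make_individual(constraints) for _ in range(pop_size)]
    for _ in range(generations):
        scored = sorted(population,
                        key=lambda ind: evaluate(ind, fitness_functions, weights),
                        reverse=True)
        parents = scored[: pop_size // 2]                # truncation selection
        children = []
        while len(children) < pop_size - len(parents):
            a, b = random.sample(parents, 2)
            cut = random.randrange(1, len(a))
            child = a[:cut] + b[cut:]                    # one-point crossover
            if random.random() < 0.2:                    # mutation within bounds
                i = random.randrange(len(child))
                lo, hi = constraints[i]
                child[i] = random.uniform(lo, hi)
            children.append(child)
        population = parents + children
    return max(population, key=lambda ind: evaluate(ind, fitness_functions, weights))

if __name__ == "__main__":
    # Illustrative fitness functions only, one per level.
    constraints = [(0.0, 10.0)] * 8
    per_level_fitness = {
        "Room":         ([lambda d: d[0]], [1.0]),
        "Layout":       ([lambda d: min(d) - max(d)], [1.0]),
        "Building":     ([lambda d: -sum(d) / len(d)], [1.0]),
        "Optimisation": ([lambda d: sum(d)], [1.0]),
    }
    for level in LEVELS:
        fns, weights = per_level_fitness[level]
        best = evolve(constraints, fns, weights)
        # The winning candidate tightens the constraints passed to the next level.
        constraints = [(max(lo, v - 1.0), min(hi, v + 1.0))
                       for (lo, hi), v in zip(constraints, best)]
        print(level, [round(v, 2) for v in best])
```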

Relevance:

10.00%

Publisher:

Abstract:

For facial expression recognition systems to be applicable in the real world, they need to be able to detect and track a previously unseen person's face and its facial movements accurately in realistic environments. A highly plausible solution involves performing a "dense" form of alignment, where 60-70 fiducial facial points are tracked with high accuracy. The problem is that, in practice, this type of dense alignment had so far been impossible to achieve in a generic sense, mainly due to poor reliability and robustness. Instead, many expression detection methods have opted for a "coarse" form of face alignment, followed by an application of a biologically inspired appearance descriptor such as the histogram of oriented gradients or Gabor magnitudes. Encouragingly, recent advances in a number of dense alignment algorithms have demonstrated both high reliability and accuracy for unseen subjects [e.g., constrained local models (CLMs)]. This begs the question: Aside from countering illumination variation, what do these appearance descriptors do that standard pixel representations do not? In this paper, we show that, when close to perfect alignment is obtained, there is no real benefit in employing these different appearance-based representations (under consistent illumination conditions). In fact, when misalignment does occur, we show that these appearance descriptors do work well by encoding robustness to alignment error. For this work, we compared two popular methods for dense alignment, subject-dependent active appearance models versus subject-independent CLMs, on the task of action-unit detection. These comparisons were conducted through a battery of experiments across various publicly available data sets (i.e., CK+, Pain, M3, and GEMEP-FERA). We also report our performance in the recent 2011 Facial Expression Recognition and Analysis Challenge for the subject-independent task.
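
To make the role of such appearance descriptors concrete, the sketch below computes a simple histogram-of-oriented-gradients (HOG) style descriptor and compares raw-pixel versus descriptor distances under a small simulated misalignment. It is a generic illustration under assumed parameters (cell size, bin count, synthetic data), not the authors' pipeline.

```python
# Minimal sketch of a HOG-style appearance descriptor of the kind the abstract
# refers to. Coarse spatial pooling of gradient orientations is what gives such
# descriptors some robustness to small alignment errors, at the cost of spatial
# precision. All parameters and data here are illustrative assumptions.
import numpy as np

def hog_descriptor(patch, cell=8, bins=9):
    """Compute a simple HOG-like descriptor for a 2-D grayscale patch."""
    gy, gx = np.gradient(patch.astype(float))
    magnitude = np.hypot(gx, gy)
    orientation = np.mod(np.arctan2(gy, gx), np.pi)       # unsigned orientation
    h, w = patch.shape
    features = []
    for y in range(0, h - cell + 1, cell):
        for x in range(0, w - cell + 1, cell):
            mag = magnitude[y:y + cell, x:x + cell].ravel()
            ori = orientation[y:y + cell, x:x + cell].ravel()
            hist, _ = np.histogram(ori, bins=bins, range=(0, np.pi), weights=mag)
            features.append(hist / (np.linalg.norm(hist) + 1e-6))  # per-cell normalisation
    return np.concatenate(features)

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    face_patch = rng.random((32, 32))                      # stand-in for an aligned face patch
    shifted = np.roll(face_patch, 2, axis=1)               # simulate a 2-pixel misalignment
    d_pixels = np.linalg.norm(face_patch - shifted)
    d_hog = np.linalg.norm(hog_descriptor(face_patch) - hog_descriptor(shifted))
    print(f"raw-pixel distance: {d_pixels:.3f}, HOG distance: {d_hog:.3f}")
```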

Relevance:

10.00%

Publisher:

Abstract:

Background Chlamydia pecorum is an obligate intracellular bacterium and the causative agent of reproductive and ocular disease in several animal hosts including koalas, sheep, cattle and goats. C. pecorum strains detected in koalas are genetically diverse, raising interesting questions about the origin and transmission of this species within koala hosts. While the ompA gene remains the most widely used target in C. pecorum typing studies, it is generally recognised that surface protein-encoding genes are not suited for phylogenetic analysis, and it is becoming increasingly apparent that the ompA gene locus is not congruent with the phylogeny of the C. pecorum genome. Using the recently sequenced C. pecorum genome (E58), we analysed 10 genes, including ompA, to evaluate the use of ompA as a molecular marker in the study of koala C. pecorum genetic diversity. Results Three genes (incA, ORF663, tarP) were found to contain sufficient nucleotide diversity and discriminatory power for detailed analysis and were used, with ompA, to genotype 24 C. pecorum PCR-positive koala samples from four populations. The most robust representation of the phylogeny of these samples was achieved through concatenation of all four gene sequences, enabling the recreation of a "true" phylogenetic signal. OmpA and incA were of limited value as fine-detailed genetic markers as they were unable to confer accurate phylogenetic distinctions between samples. On the other hand, the tarP and ORF663 genes were identified as useful "neutral" and "contingency" markers respectively, to represent the broad evolutionary history and intra-species genetic diversity of koala C. pecorum. Furthermore, the concatenation of ompA, incA and ORF663 sequences highlighted the monophyletic nature of koala C. pecorum infections by demonstrating a single evolutionary trajectory for koala hosts that is distinct from that seen in non-koala hosts. Conclusions While the continued use of ompA as a fine-detailed molecular marker for epidemiological analysis appears justified, the tarP and ORF663 genes also appear to be valuable markers of phylogenetic or biogeographic divisions at the C. pecorum intra-species level. This research has significant implications for future typing studies to understand the phylogeny, genetic diversity, and epidemiology of C. pecorum infections in the koala and other animal species.
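
The concatenation step described above can be illustrated with a toy example: aligned fragments of the four loci are joined end to end per sample before distances are computed, so a single tree can be estimated from the combined signal. The sequences and the simple p-distance below are illustrative assumptions; the study's actual alignment and phylogenetic methods are not reproduced.

```python
# Minimal sketch (hypothetical sequences only) of concatenating per-sample gene
# alignments and computing pairwise p-distances from the combined alignment.
samples = {
    # hypothetical aligned fragments of ompA, incA, ORF663 and tarP per sample
    "koala_A": {"ompA": "ATGGCT", "incA": "TTGACA", "ORF663": "GGCTTA", "tarP": "ACGTAC"},
    "koala_B": {"ompA": "ATGGTT", "incA": "TTGACA", "ORF663": "GGCTTG", "tarP": "ACGTAC"},
    "sheep_C": {"ompA": "ATGACT", "incA": "TTCACA", "ORF663": "GACTTA", "tarP": "ACTTAC"},
}
loci = ["ompA", "incA", "ORF663", "tarP"]

def concatenate(genes):
    # Join the aligned fragments in a fixed locus order.
    return "".join(genes[locus] for locus in loci)

def p_distance(a, b):
    """Proportion of aligned sites that differ between two equal-length sequences."""
    return sum(x != y for x, y in zip(a, b)) / len(a)

concatenated = {name: concatenate(genes) for name, genes in samples.items()}
names = list(concatenated)
for i, a in enumerate(names):
    for b in names[i + 1:]:
        d = p_distance(concatenated[a], concatenated[b])
        print(f"{a} vs {b}: p-distance = {d:.3f}")
```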

Relevance:

10.00%

Publisher:

Abstract:

Listening is a basic and complementary skill in second language learning. The term listening is used in language teaching to refer to a complex process that allows us to understand spoken language. Listening, the most widely used language skill, is often used in conjunction with the other skills of speaking, reading and writing. Listening is not only a skill area in first language (L1) performance, but is also a critical means of acquiring a second language (L2). Listening is the channel through which we process language in real time, employing pacing, units of encoding and decoding (two processes central to interpretation and meaning making) and pausing (which allows for reflection) that are unique to spoken language. Despite the wide range of areas investigated in listening strategy training, there is a lack of research looking specifically at how effectively L1 listening strategy training may transfer to L2. To investigate the development of any such transfer patterns, the instructional design and implementation of L1 listening strategy training will be critical.

Relevance:

10.00%

Publisher:

Abstract:

Expert knowledge is used widely in the science and practice of conservation because of the complexity of problems, relative lack of data, and the imminent nature of many conservation decisions. Expert knowledge is substantive information on a particular topic that is not widely known by others. An expert is someone who holds this knowledge and who is often deferred to in its interpretation. We refer to predictions by experts of what may happen in a particular context as expert judgments. In general, an expert-elicitation approach consists of five steps: deciding how information will be used, determining what to elicit, designing the elicitation process, performing the elicitation, and translating the elicited information into quantitative statements that can be used in a model or directly to make decisions. This last step is known as encoding. Some of the considerations in eliciting expert knowledge include determining how to work with multiple experts and how to combine multiple judgments, minimizing bias in the elicited information, and verifying the accuracy of expert information. We highlight structured elicitation techniques that, if adopted, will improve the accuracy and information content of expert judgment and ensure uncertainty is captured accurately. We suggest four aspects of an expert elicitation exercise be examined to determine its comprehensiveness and effectiveness: study design and context, elicitation design, elicitation method, and elicitation output. Just as the reliability of empirical data depends on the rigor with which it was acquired, so too does that of expert knowledge.
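
As an illustration of the final "encoding" step and of combining multiple judgments, the sketch below pools hypothetical lower/best/upper probability estimates from three experts into a single quantitative statement using a weighted linear opinion pool. The experts, weights and pooling rule are assumptions made for illustration, not recommendations taken from the abstract.

```python
# Minimal sketch of encoding elicited judgments: each expert supplies a
# (lower, best, upper) estimate of an event probability, and the estimates are
# combined by a weighted linear opinion pool. All values are hypothetical.
experts = {
    "expert_1": (0.10, 0.25, 0.40),
    "expert_2": (0.20, 0.30, 0.55),
    "expert_3": (0.05, 0.15, 0.30),
}
weights = {"expert_1": 0.4, "expert_2": 0.4, "expert_3": 0.2}  # e.g. from calibration questions

def pool(estimates, weights, index):
    """Weighted average of one component (lower=0, best=1, upper=2) across experts."""
    return sum(weights[name] * est[index] for name, est in estimates.items())

lower, best, upper = (pool(experts, weights, i) for i in range(3))
print(f"pooled estimate: {best:.2f} (interval {lower:.2f} to {upper:.2f})")
```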

Relevance:

10.00%

Publisher:

Abstract:

Older people often struggle with using contemporary products and interfaces. They show slower, less intuitive interaction with more errors. This paper reports on a large project designed to investigate why older people have these difficulties and what strategies could be used to mitigate them. The project team found that older people are less familiar with the products they own than younger people are, while both older and middle-aged people are less familiar than younger people with products that they do not own. Age-related cognitive decline is also related to slower and less intuitive performance with contemporary products and interfaces. Therefore, the reasons behind the problems that older people demonstrate with contemporary technologies involve a mix of familiarity and capability. Redundancy applied to an interface in the form of symbols and words is helpful for middle-aged and younger old people, but the oldest age group performed better with a words-only interface. Also, older people showed faster and more intuitive use with a flat interface than a nested one, although there was no difference in errors. Further work is ongoing to establish ways in which these findings can be usefully applied in the design process.

Relevance:

10.00%

Publisher:

Abstract:

The increasing popularity of video consumption on mobile devices requires an effective video coding strategy. To cope with diverse communication networks, video services often need to maintain sustainable quality when the available bandwidth is limited. One strategy for visually optimised video adaptation is to implement region-of-interest (ROI) based scalability, whereby important regions are encoded at a higher quality while maintaining sufficient quality for the rest of the frame. The result is an improved perceived quality at the same bit rate as normal encoding, which is particularly noticeable at lower bit rates. However, because of the difficulty of predicting the region of interest accurately, there has been limited research and development of ROI-based video coding for general videos. In this paper, the phase spectrum of quaternion Fourier transform (PQFT) method is adopted to determine the ROI. To improve the results of ROI detection, the saliency map from the PQFT is augmented with maps created from high-level knowledge of factors that are known to attract human attention. Hence, maps that locate faces and emphasise the centre of the screen are used in combination with the saliency map to determine the ROI. The contribution of this paper lies in an automatic ROI detection technique for coding low bit rate videos, which includes an ROI prioritisation technique to give different levels of encoding quality to multiple ROIs, and the evaluation of the proposed automatic ROI detection, which is shown to perform close to human-selected ROI based on eye fixation data.
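
A simplified sketch of the map-combination idea is given below. For brevity it uses the single-channel phase-spectrum saliency (PFT) rather than the full quaternion PQFT, and a Gaussian centre-weighting map stands in for a face detector; both simplifications, and all parameter values, are assumptions rather than the paper's implementation.

```python
# Minimal sketch: phase-spectrum saliency combined with a centre-weighting map
# (and optionally a face map) to produce an ROI mask for prioritised encoding.
import numpy as np

def phase_spectrum_saliency(frame):
    """Saliency from the phase spectrum of the Fourier transform of a grayscale frame."""
    spectrum = np.fft.fft2(frame)
    phase_only = np.exp(1j * np.angle(spectrum))          # keep phase, discard amplitude
    saliency = np.abs(np.fft.ifft2(phase_only)) ** 2
    return saliency / saliency.max()

def centre_map(shape, sigma=0.25):
    """Gaussian map emphasising the centre of the screen."""
    h, w = shape
    y, x = np.mgrid[0:h, 0:w]
    d2 = ((y - h / 2) / h) ** 2 + ((x - w / 2) / w) ** 2
    return np.exp(-d2 / (2 * sigma ** 2))

def roi_mask(frame, face_map=None, threshold=0.5):
    """Combine saliency, centre weighting and (optionally) a face map into an ROI mask."""
    combined = phase_spectrum_saliency(frame) * centre_map(frame.shape)
    if face_map is not None:
        combined = np.maximum(combined, face_map)
    combined = combined / combined.max()
    return combined > threshold

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    frame = rng.random((72, 128))                         # stand-in for a grayscale frame
    mask = roi_mask(frame)
    print(f"ROI covers {mask.mean():.1%} of the frame")
```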

Relevance:

10.00%

Publisher:

Abstract:

Learning to think spatially in mathematics involves developing proficiency with graphics. This paper reports on two investigations of spatial thinking and graphics. The first investigation explored the importance of graphics as one of three communication systems (i.e. text, symbols, graphics) used to provide information in numeracy test items. The results showed that graphics were embedded in at least 50% of test items across three year levels. The second investigation examined 11- to 12-year-olds' performance on two mathematical tasks which required substantial interpretation of graphics and spatial thinking. The outcomes revealed that many students lacked proficiency in the basic spatial skills of visual memory and spatial perception and the more advanced skills of spatial orientation and spatial visualisation. This paper concludes with a reaffirmation of the importance of spatial thinking in mathematics and proposes ways to capitalise on graphics in learning to think spatially.

Relevance:

10.00%

Publisher:

Abstract:

Graphical tasks have become a prominent aspect of mathematics assessment. From a conceptual stance, the purpose of this study was to better understand the composition of graphical tasks commonly used to assess students' mathematics understandings. Through an iterative design, the investigation described the sense-making of 11- to 12-year-olds as they decoded mathematics tasks which contained a graphic. An ongoing analysis of two phases of data collection was undertaken as we analysed the extent to which various elements of text, graphics, and symbols influenced student sense-making. Specifically, the study outlined the changed behaviour (and performance) of the participants as they solved graphical tasks that had been modified with respect to these elements. We propose a theoretical framework for understanding the composition of a graphical task and identify three specific elements which are dependently and independently related to each other, namely the graphic, the text, and the symbols. Results indicated that although changes to the graphical tasks were minimal, a change in student success and understanding was most evident when the graphic element was modified. Implications include the need for test designers to carefully consider the graphics embedded within mathematics tasks, since the elements within graphical tasks greatly influence student understanding.

Relevance:

10.00%

Publisher:

Abstract:

Background We have investigated the possibility and feasibility of producing the HPV-11 L1 major capsid protein in transgenic Arabidopsis thaliana ecotype Columbia and Nicotiana tabacum cv. Xanthi as potential sources for an inexpensive subunit vaccine. Results Transformation of plants was only achieved with the HPV-11 L1 gene with the C-terminal nuclear localization signal (NLS-) encoding region removed, and not with the full-length gene. The HPV-11 L1 NLS- gene was stably integrated and inherited through several generations of transgenic plants. Plant-derived HPV-11 L1 protein was capable of assembling into virus-like particles (VLPs), although the resulting particles displayed a pleomorphic phenotype. Neutralising monoclonal antibodies binding both surface-linear and conformation-specific epitopes bound the A. thaliana-derived particles and, to a lesser degree, the N. tabacum-derived particles, suggesting that plant-derived and insect cell-derived VLPs displayed similar antigenic properties. Yields of up to 12 μg/g of HPV-11 L1 NLS- protein were harvested from transgenic A. thaliana plants, and 2 μg/g from N. tabacum plants, a significant increase over previous efforts. Immunization of New Zealand white rabbits with ∼50 μg of plant-derived HPV-11 L1 NLS- protein induced an antibody response that predominantly recognized insect cell-produced HPV-11 L1 NLS- and not NLS+ VLPs. Evaluation of the same sera concluded that none of them were able to neutralise pseudovirions in vitro. Conclusion We expressed the wild-type HPV-11 L1 NLS- gene in two different plant species and increased yields of HPV-11 L1 protein by between 500- and 1000-fold compared to previous reports. Inoculation of rabbits with extracts from both plant types resulted in a weak immune response, and antisera neither reacted with native HPV-11 L1 VLPs, nor did they neutralise HPV-11 pseudovirion infectivity. This has important and potentially negative implications for the production of HPV-11 vaccines in plants. © 2007 Kohl et al; licensee BioMed Central Ltd.

Relevance:

10.00%

Publisher:

Abstract:

We constructed a novel autonomously replicating gene expression shuttle vector, with the aim of developing a system for transiently expressing proteins at levels useful for commercial production of vaccines and other proteins in plants. The vector, pRIC, is based on the mild strain of the geminivirus Bean yellow dwarf virus (BeYDV-m) and is replicationally released into plant cells from a recombinant Agrobacterium tumefaciens Ti plasmid. pRIC differs from most other geminivirus-based vectors in that the BeYDV replication-associated elements were included in cis rather than from a co-transfected plasmid, while the BeYDV capsid protein (CP) and movement protein (MP) genes were replaced by an antigen-encoding transgene expression cassette derived from the non-replicating A. tumefaciens vector, pTRAc. We tested vector efficacy in Nicotiana benthamiana by comparing transient cytoplasmic expression between pRIC and pTRAc constructs encoding either enhanced green fluorescent protein (EGFP) or the subunit vaccine antigens, human papillomavirus subtype 16 (HPV-16) major CP L1 and human immunodeficiency virus subtype C p24 antigen. The pRIC constructs were amplified in planta by up to two orders of magnitude through replication, while 50% more HPV-16 L1 and three- to seven-fold more EGFP and HIV-1 p24 were expressed from pRIC than from pTRAc. Vector replication was shown to be correlated with increased protein expression. We anticipate that this new high-yielding plant expression vector will contribute towards the development of a viable plant production platform for vaccine candidates and other pharmaceuticals. © 2009 Blackwell Publishing Ltd.

Relevance:

10.00%

Publisher:

Abstract:

HIV-1 Pr55Gag virus-like particles (VLPs) are strong immunogens with potential as candidate HIV vaccines. VLP immunogenicity can be broadened by making chimaeric Gag molecules; however, VLPs incorporating polypeptides longer than 200 aa fused in frame with Gag have not yet been reported. We constructed a range of gag-derived genes encoding in-frame C-terminal fusions of myristoylation-competent native Pr55Gag and p6-truncated Gag (Pr50Gag) to test the effects of polypeptide length and sequence on VLP formation and morphology, in an insect cell expression system. Fused sequences included a modified reverse transcriptase-Tat-Nef fusion polypeptide (RTTN, 778 aa), and truncated versions of RTTN ranging from 113 aa to 450 aa. Baculovirus-expressed chimaeric proteins were examined by western blot and electron microscopy. All chimaeras formed VLPs which could be purified by sucrose gradient centrifugation. VLP diameter increased with protein MW, from ∼100 nm for Pr55Gag to ∼250 nm for GagRTTN. The presence or absence of the Gag p6 region did not obviously affect VLP formation or appearance. GagRT chimaeric particles were successfully used in mice to boost T-cell responses to Gag and RT that were elicited by a DNA vaccine encoding a GagRTTN polypeptide, indicating the potential of such chimaeras to be used as candidate HIV vaccines. © 2008 Elsevier B.V. All rights reserved.

Relevance:

10.00%

Publisher:

Abstract:

Psittacine beak and feather disease (PBFD) has a broad host range and is widespread in wild and captive psittacine populations in Asia, Africa, the Americas, Europe and Australasia. Beak and feather disease circovirus (BFDV) is the causative agent. BFDV has an ~2 kb single-stranded circular DNA genome encoding just two proteins (Rep and CP). In this study we provide support for demarcation of BFDV strains by phylogenetic analysis of 65 complete genomes from databases and 22 new BFDV sequences isolated from infected psittacines in South Africa. We propose 94% genome-wide sequence identity as a strain demarcation threshold, with isolates sharing > 94% identity belonging to the same strain, and strain subtypes sharing > 98% identity. Currently, BFDV diversity falls within 14 strains, with five highly divergent isolates from budgerigars probably representing a new species of circovirus with three strains (budgerigar circovirus; BCV-A, -B and -C). The geographical distribution of BFDV and BCV strains is strongly linked to the international trade in exotic birds; strains with more than one host are generally located in the same geographical area. Lastly, we examined BFDV and BCV sequences for evidence of recombination, and determined that recombination had occurred in most BFDV and BCV strains. We established that there were two globally significant recombination hotspots in the viral genome: the first is along the entire intergenic region and the second is in the C-terminal portion of the CP ORF. The implications of our results for the taxonomy and classification of circoviruses are discussed. © 2011 SGM.
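
The identity-threshold demarcation rule can be illustrated with toy sequences: isolates are grouped into the same strain when pairwise genome-wide identity exceeds 94%, and into the same subtype above 98%. The sequences and the single-linkage grouping below are illustrative assumptions, not the study's actual data or method.

```python
# Minimal sketch (toy sequences, not real BFDV genomes) of threshold-based strain
# demarcation: cluster isolates whose pairwise identity exceeds a given threshold.

def identity(a, b):
    """Pairwise identity of two aligned, equal-length sequences."""
    return sum(x == y for x, y in zip(a, b)) / len(a)

def group(isolates, threshold):
    """Single-linkage grouping of isolates whose identity exceeds the threshold."""
    groups = []
    for name, seq in isolates.items():
        for g in groups:
            if any(identity(seq, isolates[other]) > threshold for other in g):
                g.append(name)
                break
        else:
            groups.append([name])
    return groups

isolates = {
    "isolate_1": "ATGCATGCATGCATGCATGC",
    "isolate_2": "ATGCATGCATGCATGCATGA",   # 95% identical to isolate_1
    "isolate_3": "TTGAATGAATCCATGAATCC",   # divergent
}
print("strains  (>94%):", group(isolates, 0.94))
print("subtypes (>98%):", group(isolates, 0.98))
```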

Relevance:

10.00%

Publisher:

Abstract:

A fundamental problem faced by stereo matching algorithms is the matching or correspondence problem. A wide range of algorithms have been proposed for the correspondence problem. For all matching algorithms, it would be useful to be able to compute a measure of the probability of correctness, or reliability, of a match. This paper focuses in particular on one class of matching algorithms, which are based on the rank transform. The interest in these algorithms for stereo matching stems from their invariance to radiometric distortion, and their amenability to fast hardware implementation. This work differs from previous work in that it derives, from first principles, an expression for the probability of a correct match. This method was based on an enumeration of all possible symbols for matching. The theoretical results for disparity error prediction, obtained using this method, were found to agree well with experimental results. However, disadvantages of the technique developed here are that it is not easily applicable to real images, and also that it is too computationally expensive for practical window sizes. Nevertheless, the exercise provides an interesting and novel analysis of match reliability.
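
For readers unfamiliar with the rank transform, the sketch below computes it for a pair of synthetic images and picks the disparity with the lowest sum of absolute rank differences. The window size, disparity search range and cost function are illustrative assumptions; the paper's probability-of-correct-match analysis is not reproduced here.

```python
# Minimal sketch of the rank transform and of rank-based stereo matching on a
# synthetic image pair. Not the paper's analysis; parameters are illustrative.
import numpy as np

def rank_transform(image, radius=2):
    """Replace each pixel by the number of neighbours in its window that are darker."""
    h, w = image.shape
    out = np.zeros_like(image, dtype=int)
    for y in range(radius, h - radius):
        for x in range(radius, w - radius):
            window = image[y - radius:y + radius + 1, x - radius:x + radius + 1]
            out[y, x] = np.sum(window < image[y, x])
    return out

def matching_cost(left_ranks, right_ranks, y, x, disparity, radius=2):
    """Sum of absolute rank differences between corresponding windows."""
    lw = left_ranks[y - radius:y + radius + 1, x - radius:x + radius + 1]
    rw = right_ranks[y - radius:y + radius + 1,
                     x - disparity - radius:x - disparity + radius + 1]
    return np.abs(lw - rw).sum()

if __name__ == "__main__":
    rng = np.random.default_rng(2)
    left = rng.integers(0, 256, size=(20, 40))
    right = np.roll(left, -3, axis=1)                     # true disparity of 3 pixels
    lr, rr = rank_transform(left), rank_transform(right)
    y, x = 10, 25
    costs = {d: matching_cost(lr, rr, y, x, d) for d in range(8)}
    print("estimated disparity:", min(costs, key=costs.get))
```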

Relevance:

10.00%

Publisher:

Abstract:

New Voices, New Visions brings together a collection of papers that engage with the ideas of nation, identity and place. The title New Voices, New Visions harks back to earlier scholarship that endeavoured to explore these issues. It therefore makes links between old and new stories of Australian identity, tracing the continuities, shifts and changes in how Australia is imagined. The collection is deliberately interdisciplinary, gathering work by historians, literary and film scholars, communication and cultural theorists, political scientists and sociologists. This mix of perspectives enables the reader to trace ideas, concepts and theories across a range of disciplines and understand the distinctive ways in which different disciplines engage with ideas of nation, space and Australian identity. The book is written in an engaging and accessible manner, making it an excellent text for undergraduate and postgraduate students in the field of Australian Studies. It will be especially useful for the growing number of students living outside Australia who engage with Australian literature and culture. The book provides a range of topics that introduces students to key issues and concepts. It also situates these ideas in historical context. New Voices, New Visions engages with key contemporary issues in everyday Australian life: environment and climate change, immigration, consumerism, travel and cities. It explores these various topics by considering case studies, both contemporary and historical. For example, the issue of attitudes to Asia is analysed through art; the topic of national symbols through the case of the crocodile; and approaches to immigration via a popular reality television programme.